Unity ML-Agents 1.0 - Training your first A.I

  • Published: 6 Nov 2024

Comments • 224

  • @SebastianSchuchmannAI
    @SebastianSchuchmannAI  4 years ago +14

    Hello everybody! I have created a Discord Channel for everybody wanting to learn ML-Agents. It's a place where we can help each other out, ask questions, share ideas, and so on. You can join here: discord.gg/wDPWsQT

    • @soareverix
      @soareverix 4 years ago +1

      Does the server still exist?

    • @mithunmmurthy1473
      @mithunmmurthy1473 4 years ago

      @@soareverix yup

    • @strofll
      @strofll 3 years ago +2

      Can you pay attention to the Discord server that you are streaming? There is no admin and no order, and it creates complicated situations.

    • @immanuelwallace7095
      @immanuelwallace7095 3 years ago

      instablaster

    • @lucutes2936
      @lucutes2936 1 month ago

      @@soareverix maybe

  • @youtuber9991
    @youtuber9991 2 years ago +10

    Great intro video! I'd love to see:
    1) How to set up agents/players with multiple states (switching from stealing to attacking to shooting, etc.)
    2) How to have agents/players on the same team and performing the same goals together
    3) How to apply these different states to AI agents in gameplay

  • @SebastianSchuchmannAI
    @SebastianSchuchmannAI  4 years ago +32

    Note for everybody watching: Two days ago ML-Agents Release 2 was released. Don't worry, the latest release just contained bug fixes, meaning you can still follow the tutorial without doing anything differently. The naming may be a bit confusing because Release 2 sounds like a big thing but it isn't, they just changed their naming scheme. I would always recommend using the latest release version! Enjoy! :)

    • @Daniel_WR_Hart
      @Daniel_WR_Hart 3 years ago

      Oh man, Release 2 was a year ago, but it's still the latest verified version that can be downloaded from the package manager

    • @bazzel1059
      @bazzel1059 1 year ago

      Were there any further releases? Some may consider it cringe, but I'm asking ChatGPT to clarify certain methods that I can't understand from the documentation, and it told me that the newest version of ML-Agents takes ActionBuffers parameters instead of float[] parameters. This could obviously just be ChatGPT being wrong, but this video and the tutorial I am following are 2 years old, after all.
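
      For reference, that matches the newer API: recent ML-Agents releases do pass an ActionBuffers struct instead of float[]. A rough sketch of the newer signatures, with a made-up agent class purely for illustration:

          using Unity.MLAgents;
          using Unity.MLAgents.Actuators;
          using UnityEngine;

          public class NewApiAgentSketch : Agent
          {
              // Newer releases: actions arrive wrapped in an ActionBuffers struct
              // instead of a float[] array.
              public override void OnActionReceived(ActionBuffers actions)
              {
                  float moveX = actions.ContinuousActions[0];
                  // ... apply movement, assign rewards, etc.
              }

              public override void Heuristic(in ActionBuffers actionsOut)
              {
                  var continuous = actionsOut.ContinuousActions;
                  continuous[0] = Input.GetAxis("Horizontal");
              }
          }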

  • @viveknegi4243
    @viveknegi4243 4 years ago +9

    The video was amazingly well done! To anyone following this on a Windows 10 machine: make sure you have 64-bit Python installed, not 32-bit. The command that works for me is this:
    "python ml-agents/mlagents/trainers/learn.py --run-id=MyFirstAI". You first need to cd into the ml-agents repo just like the video says and then type in this command to start training. If you still get issues, you can use Anaconda to create a virtual env, even though Unity themselves removed that dependency.

    • @Cazametroides
      @Cazametroides 4 years ago

      Still not working for me, I get no message, I'm simply prompted to enter another command. I'm using the ml-agents0-release_6, do you think that's the issue?

  • @jorgebarroso2496
    @jorgebarroso2496 4 years ago +3

    For everyone having problems importing the ML-Agents folder into the Unity project:
    1. If you get errors like "The type or namespace 'mlagents' could not be found": reimport the ML-Agents package, or import it if you hadn't already done it.
    2. If you get errors like "The type or namespace 'mlagents.actuators' could not be found" after already having imported the ML-Agents package through the Package Manager: try importing it manually from the GitHub cloned folder (ML-Agents release 8) /com.unity.ml-agents/package.json

    • @lpthurler
      @lpthurler 4 years ago

      Jorge Barroso, I solved the second problem by just importing the ML-Agents 1.5.0-preview version.

    • @jorgebarroso2496
      @jorgebarroso2496 4 years ago

      @@lpthurler Yeah, that's what I meant. For some reason I couldn't install the 1.5.0 version from the package manager, so I installed it manually. I didn't know it was only me.

    • @gagamagaj2136
      @gagamagaj2136 4 years ago

      which unity version did you have?

    • @jorgebarroso2496
      @jorgebarroso2496 4 years ago

      @@gagamagaj2136 2019.4.3f1

    • @gagamagaj2136
      @gagamagaj2136 4 years ago

      Alternative solution: if you have the ml-agents package from the Package Manager (which is currently a bit older than the latest release: 1.0.5), then you could just download an older release from GitHub with the older version (in my case it was 1.0.2) and use its examples folder (projects/assets/mlagents) - helped me with the "2." issue

  • @simeonstoyanov5226
    @simeonstoyanov5226 3 years ago +4

    Hey,
    Great tutorial! I'm still new to both Unity and ML Agents, but just finished training my first agents on the 3D ball game.
    Just want to point out, at the time of watching the video, the ML Agents repository was on release 12 and I had an extra problem in addition to the "trainer_config" problem (thanks DeJMan).
    I'll do my best to describe it below:
    I was getting 254 errors - most of them were "CS0234" - essentially the C# script couldn't recognize any of the ML Agents related stuff (I think - still new so not 100% sure).
    The solution that worked for me was to go to the Package Manager, click on the + in the top left corner, and select "Add Package from disk", then go to the ML-Agents repository downloaded from the GitHub link in the description, then to the folder "com.unity.ml-agents", and add the file "package.json".
    Then, do the above again but for the folder "com.unity.ml-agents.extensions" - again the "package.json" file.
    This solved it for me. Hope it helps someone else.

    • @pixelpowergo1607
      @pixelpowergo1607 3 years ago

      You are an absolute legend! How did you figure that out? It cured the problem I was having. Hopefully the comment I'm writing boosts this one up in the list so more people can see it. Thank you!

    • @simeonstoyanov5226
      @simeonstoyanov5226 3 years ago

      @@pixelpowergo1607 Thanks! Glad it helped someone :)

  • @pushkar260
    @pushkar260 3 years ago +1

    Assing rewards @3:06 nice.

  • @UN0sebastian
    @UN0sebastian 4 years ago +4

    Thank you for the video, it is awesome! :)
    I loved the effort you put into attaching memorable graphics to explain the vocabulary and fields used

  • @JS-ir7wh
    @JS-ir7wh 4 years ago +1

    This video was a good start for me. I got the demos working... Now the hard/fun part. Thank you.

  • @secCheGuevara
    @secCheGuevara 3 years ago +1

    I can only reiterate the same feeling as everyone else around here! Thank you so much for those videos! They're great and so are you!

  • @AntonKozikowski
    @AntonKozikowski 3 years ago

    keep it up Sebastian! this is the next chapter my friend! cheers from New Mexico!

  • @KishorDeshmukh
    @KishorDeshmukh 4 years ago +4

    Same here, really can't wait for more!! You are amazing! Your videos are amazing!👏

  • @trungkiennguyen6955
    @trungkiennguyen6955 4 years ago +13

    Your video is amazing, that really just saved me 170 years of learning bro. Great work!!

  • @crassflam8830
    @crassflam8830 4 years ago +4

    Hi Sebastian! Just wanted to share a very slight correction regarding stacked observations:
    You stated that the stacked observation setting determines how many vector observations are gathered before sending them off to the agent; this is close but slightly inaccurate. It will still gather a new vector observation every time a decision is requested, and it will simply add the new vector to the running "stacked" history, while bumping the oldest observation out of the stacked vector array. For example, if you have stacked observations set to two and you are sending only a velocity reading to the agent, on the first decision the stacked vector might be (.567, 0f), on the second decision, the stacked observation might be (0.897f, 0.567f), the third could be (0.924f, 0.897f), and so on...
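
    To make that concrete, here is a rough sketch of the agent side, assuming a made-up agent that observes a single velocity value (the stacking itself is configured via the "Stacked Vectors" setting in Behavior Parameters, not in code):

        using Unity.MLAgents;
        using Unity.MLAgents.Sensors;
        using UnityEngine;

        public class VelocitySketchAgent : Agent
        {
            Rigidbody body;

            public override void Initialize()
            {
                body = GetComponent<Rigidbody>();
            }

            // Called once per decision request; adds ONE new observation each time.
            public override void CollectObservations(VectorSensor sensor)
            {
                sensor.AddObservation(body.velocity.x);
            }

            // With Stacked Vectors = 2, the policy sees (current, previous),
            // e.g. (0.897f, 0.567f) on the second decision, as described above.
        }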

    • @SebastianSchuchmannAI
      @SebastianSchuchmannAI  4 years ago +1

      Hey! Thanks for the correction, I wish I could change the video 😄

    • @crassflam8830
      @crassflam8830 4 years ago

      @@SebastianSchuchmannAI It's a really well made video lol

    • @crassflam8830
      @crassflam8830 4 years ago +1

      @@SebastianSchuchmannAI I think there may be some ways to do live editing or re-uploading. (people often need to change videos to make corrections or remove copyrighted content). Def look into it when you get a chance! (there's also the popup box option). Not a really big deal though; that one error is the only imperfection in an otherwise perfect video!

    • @DeJMan
      @DeJMan 4 years ago

      Does this mean if I only pass in positions then velocity won't be calculated?

  • @adeelfarzandali
    @adeelfarzandali 4 years ago

    I have never seen such in depth explanation anywhere. Great.

  • @paperlessmove
    @paperlessmove 2 years ago

    Nice explanation! Thanks 🙌

  • @yanndetaf5725
    @yanndetaf5725 4 years ago

    You are really amazing....
    That was a friendly introduction to Unity ML-Agents.
    You explain things in a very simple way, which makes it easy to clearly understand the concepts behind ML-Agents.
    Great work, guy

  • @ReinkeDK
    @ReinkeDK 4 years ago

    Hi Sebastian
    Really cool video, just started to play with ML-Agents myself and have trained all of the example ones.
    But I never dug into how it works; your video explained it nicely.
    You have gotten 1 new subscriber :)

  • @djsmellhodet
    @djsmellhodet 3 years ago

    You are the only one really explaining ml agents, thanks a lot my game needed you ;)

  • @monkeyrobotsinc.9875
    @monkeyrobotsinc.9875 3 years ago +2

    Assing rewards is my favorite type of reward.

  • @mahdiyanoori2700
    @mahdiyanoori2700 2 years ago

    This video is just so awesammmmmmeee thankyou

  • @hotakutsuki
    @hotakutsuki 3 years ago

    This video was amazing! I'm watching it for the 3rd time... and now about to start the training

  • @connorkoury5434
    @connorkoury5434 4 years ago +4

    What a clean video, can’t wait for more

  • @herbschilling2215
    @herbschilling2215 4 years ago

    Nicely done. I look forward to more videos on ML-Agents

  • @JelleVermandere
    @JelleVermandere 4 years ago +2

    Very interesting, definitely something I will mess with in the future!

  • @timbon
    @timbon 4 years ago

    You definitely deserve more subscribers, my dude. This was a fantastic tutorial!

    • @SebastianSchuchmannAI
      @SebastianSchuchmannAI  4 years ago

      Thanks ❤️

    • @maoryatskan6346
      @maoryatskan6346 4 years ago +2

      Nice tutorial, but it didn't work for me on a Windows machine.
      Please make a tutorial on how to install it properly on a Windows machine.

  • @VatSri
    @VatSri 5 months ago

    You just earned a new subscriber.

  • @pastuh
    @pastuh 10 months ago +1

    When can we expect an updated tutorial? :)

  • @vaibhavshukla7769
    @vaibhavshukla7769 2 years ago +1

    great info

  • @sebastiansilva6132
    @sebastiansilva6132 4 years ago

    Man, this is huge. Thanks so much

  • @limbenny22
    @limbenny22 4 years ago

    This video is incredibly helpful!! You are amazing!

  • @AnoTheRock69
    @AnoTheRock69 4 years ago

    Thank you! It is a very informative video regarding Unity ML-Agents

  • @soareverix
    @soareverix 3 years ago

    If you want to run with a specific config file, use this command: mlagents-learn config/ppo/CustomConfigNameHere.yaml --run-id=MyFirstAI
    This will let you change the number of max steps so that your model can continue to learn indefinitely and you can tweak the other values to figure out which ones work best.
    If you want to continue training a specific model, you can do mlagents-learn --run-id=MyFirstAI --resume. This will let you pause your model and later get back to training.
    (Please note that on my computer, it writes out two dashes as -- instead of - -. Make sure to use two dashes for these commands!)
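
    For reference, a per-behavior config file in these releases looks roughly like the sketch below (the behavior name and values are placeholders; compare with the files under config/ppo/ in your release). max_steps is the value to raise if you want training to run longer:

        behaviors:
          MyBehaviorName:          # must match the Behavior Name set in Unity
            trainer_type: ppo
            hyperparameters:
              batch_size: 64
              buffer_size: 12000
              learning_rate: 3.0e-4
            network_settings:
              hidden_units: 128
              num_layers: 2
            reward_signals:
              extrinsic:
                gamma: 0.99
                strength: 1.0
            max_steps: 500000      # raise this for longer training runs
            time_horizon: 1000
            summary_freq: 12000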

  • @lindsay5985
    @lindsay5985 2 years ago +2

    Anyone else get a stack of compiler errors at Step 5: "Assets/ML-Agents/Examples/PushBlockWithInput/Scripts/PushBlockWithInputPlayerController.cs(109,31): error CS0246: The type or namespace name 'IInputActionCollection2' could not be found (are you missing a using directive or an assembly reference?)" ?

  • @bombbomb5554
    @bombbomb5554 3 years ago

    THANK YOU SO MUCH FOR THIS VIDEO! it helped me so much

  • @viverohessmiguel9172
    @viverohessmiguel9172 3 years ago +3

    When I install the ML-Agents package, a lot of errors start to appear saying that 'Actuators' does not exist, and the same for 'ActionBuffers'.
    Does anyone know how to solve this problem?

  • @pulkitmidha5710
    @pulkitmidha5710 2 years ago

    Great video, thanks

  • @layrik-7951
    @layrik-7951 4 years ago +2

    ML-Agents release 7 has a code problem: Assets\ml-agents-release_7\com.unity.ml-agents\Runtime\Grpc\CommunicatorObjects\UnityInput.cs(134,28): error CS0115: 'UnityInputProto.ToString()': no suitable method found to override

  • @owengillett5806
    @owengillett5806 4 years ago

    Very useful and informative, thank you

  • @siddharthpreetham1408
    @siddharthpreetham1408 4 years ago

    Amazing video! Thanks a lot!

  • @franziskaneu5915
    @franziskaneu5915 4 years ago

    Nice video Seppi!

  • @maloxi1472
    @maloxi1472 4 years ago

    Amazing! I would like more videos about the machine learning framework, please!

  • @karimedx
    @karimedx 4 years ago +1

    Thank you... I really like the ML-Agents project. I hope you post more tutorials on it

  • @berkertopaloglu911
    @berkertopaloglu911 4 years ago

    You are a great guy, keep up the good work

  • @carny666
    @carny666 4 years ago +4

    This is great. I'd love to see you start a new simple scene using an ML agent, code and all.

  • @harrivayrynen
    @harrivayrynen 4 years ago

    Thanks for the version 1.0 video. It would be nice to see a video where you build one example from scratch, take the time to explain the settings and parameters in Unity, and use graphs to check how learning behaves when you change settings.

  • @comet-tech
    @comet-tech 4 years ago

    Great Video!! Awesome!

  • @DroneMesh
    @DroneMesh 4 years ago

    Keep going... I left for a bit too, and when I came back I found out everything has changed a lot. Your vids were the refresher I needed... Maybe we can collaborate sometime :)

    • @SebastianSchuchmannAI
      @SebastianSchuchmannAI  4 years ago

      Thank you very much. You have a great channel, I can probably learn a lot from you. Always open to collaboration!!

  • @jorgebarroso2496
    @jorgebarroso2496 4 years ago

    For everyone having problems installing ML-Agents via Python:
    1. Check that you are using 64-bit Python.
    2. Remember to run it in a shell like Windows PowerShell, Windows cmd or Anaconda instead of directly in the Python exe. If you see this ">>>", you are inside the Python interpreter; write quit() or press Ctrl-Z + Return to exit to the shell. If that quits a program, you were not in a shell.
    3. If you are using Python 3.9 or higher (it probably gives you ERROR: Exit status 1...), try installing Python 3.8 and make sure you reassign the paths for Python, or delete Python 3.9 so it automatically uses Python 3.8.
    4. Try updating pip with "python -m pip install -U pip" on Windows or "pip install -U pip" on Linux or macOS.

  • @northstar7978
    @northstar7978 4 years ago

    Very good video, here is a random comment to help you build your channel. Good luck

  • @THELOBABAH
    @THELOBABAH 3 years ago

    great! thank you so much

  • @Charan_Vendra
    @Charan_Vendra 3 years ago

    Great Video

  • @muyibolanleaghedo8184
    @muyibolanleaghedo8184 3 years ago +15

    Literally 6 months later you can't even run the project anymore... all there is is compiler errors because Unity updated everything...

    • @diyordev35
      @diyordev35 3 years ago

      Do you know any fixes? I've been trying to fix it for 2 days

  • @benevolenttumour
    @benevolenttumour 4 years ago +1

    Great video! I have been looking for an up-to-date intro to Unity's ML-Agents and this was perfect. The only issue is that the second command in the description isn't what you typed in the video. Looking forward to more!

  • @TheStrokeForge
    @TheStrokeForge 4 years ago

    Really cool!!!

  • @carmineferrara388
    @carmineferrara388 4 years ago +1

    Thank you, this type of video is so useful for me; I'm going to start an ML traineeship next month. Can you make a video about the most important categories of machine learning and how to use them within ML-Agents?

  • @psyphy
    @psyphy 4 years ago +1

    Great video man. With the help of this, I trained an AI to balance a pole in 3D. I even made a video. Can I ask you a question? How do you record videos from the Unity Editor? (I used an asset from the Asset Store called Video Capture. Do you use the same?)

    • @thebloocat
      @thebloocat 4 years ago

      There's a built-in recording feature if you're using Windows: the Game Bar, opened with Win+G

  • @amarustudios9737
    @amarustudios9737 1 year ago +1

    I was really into this and super eager to subscribe, but this 12-minute video just didn't cut it for me. It all felt like a high-level overview with basic explanations, which is not what I was expecting to learn when I clicked on a video called "Training your first A.I", since the video didn't walk me through actually creating a new agent and behavior and then training the AI to do that task, as the title suggests. When you upload a more complete tutorial I will like, subscribe, hit the bell icon and tell all my friends to do the same.

    • @theashbot4097
      @theashbot4097 1 year ago

      This tutorial has more info on how to do that: ruclips.net/video/RANRz9oyzko/видео.html&t. I made it a month ago so everything is up to date.

  • @AbhishekVerma13
    @AbhishekVerma13 4 years ago

    Sebastian, great video. I followed the steps and everything works except the last mlagents-learn command. Still figuring it out. You were a little fast, and your Step 5 / Step 6 on-screen display was overlapping your mouse clicks, but I could follow by pausing and replaying... so not a big deal. Keep up the good work

  • @mohamadsenpai9030
    @mohamadsenpai9030 4 years ago

    This is great!

  • @MinhNguyen-vl7jj
    @MinhNguyen-vl7jj 4 years ago

    fantastic tutorial!!

  • @rickybloss8537
    @rickybloss8537 3 years ago +1

    Why doesn't pip3 install mlagents work on my PC? Is it because it's Windows?

  • @Kugelschrei
    @Kugelschrei 4 years ago

    Hey! I am about to start working on my Game Design bachelor thesis. ML is pretty much the only part of programming I haven't really explored a lot, though I have an understanding of the concepts behind it (Hidden layers, bias and weights, fitness function...), which is why I want to explore ML with my bachelor thesis, in the context of game design.
    I haven't really settled on a topic, though I'd love to turn the training process itself into some sort of game. I got a few ideas regarding this:
    1) Let the user perform the selection which would otherwise happen by evolutionary algorithms (basically choosing AIs to keep for the next iteration, i.e. "manual evolution");
    2) Let the user change the rewards => Turn it into an "AI sandbox".
    I am just diving into the topic and stumbled upon Unity's ML-Agents. Do you know if my two options would be applicable for the ML-Agents framework? My main concern is that I could be locking myself into the framework too much and I lose control (or might need a different training algorithm which is not included), not allowing me to achieve my goal.
    Any thoughts on this? It seems like you have a better understanding than I have on ML-Agents.
    Great video by the way, subscribed right away :D

    • @SebastianSchuchmannAI
      @SebastianSchuchmannAI  4 years ago

      I think you have a valid concern. The problem with having the users change the reward is that the training process is separate from the engine and therefore separate from the built application. If you want a nicely packaged application like Unity can deliver, only inference is possible right now. You would need to create your own package that somehow includes the Python and the Unity part, which is probably possible in some way. Additionally, evolutionary algorithms are not included in ML-Agents, though implementing one isn't too challenging if you are not striving for maximum efficiency. In summary, I would recommend prototyping with ML-Agents because it's easy to work with, but be careful not to invest too much.

  • @DeJMan
    @DeJMan 4 years ago

    1. What is the use of Academy.Instance.EnvironmentParameters? Why do we use ResetParameters in the Academy instead of manually putting those values inside OnEpisodeBegin?
    2. How exactly does setting values in the "actionsOut" array in the Heuristic function call the OnActionReceived function with those values? The Heuristic method does not return any values.
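
    Regarding question 2: the framework hands Heuristic a buffer to fill in place and then forwards those same values to OnActionReceived, so no return value is needed. A rough sketch using the 1.0-era float[] signatures, with a made-up agent purely for illustration:

        using Unity.MLAgents;
        using UnityEngine;

        public class HeuristicSketchAgent : Agent
        {
            // Used when Behavior Type is "Heuristic Only" (or no trainer is
            // connected): you write into actionsOut, and the framework then
            // calls OnActionReceived with exactly those values.
            public override void Heuristic(float[] actionsOut)
            {
                actionsOut[0] = Input.GetAxis("Horizontal");
                actionsOut[1] = Input.GetAxis("Vertical");
            }

            public override void OnActionReceived(float[] vectorAction)
            {
                // Receives either the trainer's actions or the heuristic values above.
                var move = new Vector3(vectorAction[0], 0f, vectorAction[1]);
                transform.position += move * Time.deltaTime;
            }
        }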

  • @Husain8107
    @Husain8107 4 years ago

    Hello Sebastian!
    Greatly appreciated the video.
    For a future video or tutorial series, can you please show how we can write the Python script that allows us to train the ML-Agents?
    I would prefer if you covered both the TD3 and PPO algorithms, just to see which works better.
    Congratulations on a job well done with this video!
    Hope to see more!

  • @soareverix
    @soareverix 4 years ago

    I got it working! Quick question though: I've been running the 'walker' scene and I still feel like the agents are unbalanced and could use more training after it automatically stopped. Is there a way I can add to their training time and improve performance?

    • @soareverix
      @soareverix 3 years ago

      (Just had to edit the config file. Got it!)

  • @AcTAL0n
    @AcTAL0n 2 years ago

    Hey Sebastian, sorry for the stupid questions, but did you ever make a video/post about which extensions you use with VS and Unity? I'm especially curious about the inline parameter hints, and I couldn't find any help on the web or on your Discord. But great video, I enjoyed it a lot :)
    Best regards from Berlin ;)

  • @pedrodesanti6266
    @pedrodesanti6266 4 years ago

    This opens so many doors for everybody

  • @DeJMan
    @DeJMan 4 years ago

    You mentioned complex topics that need to be mastered to fully grasp machine learning. Can you list out some terminologies?

    • @SebastianSchuchmannAI
      @SebastianSchuchmannAI  4 years ago

      I was more referring to the ML-Agents framework, but of course it is even more true for machine learning in general. I think hyperparameter tuning is one of those complex topics, where an understanding of the algorithms (PPO/SAC) helps a lot, as well as Curiosity, GAIL and of course all the basics of machine learning and reinforcement learning in general. This course by OpenAI is a great resource for that: spinningup.openai.com/en/latest/spinningup/rl_intro.html But I have in no way mastered it, so take my advice with a large grain of salt.
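
      For orientation only: in the newer YAML config schema, Curiosity and GAIL show up as extra reward signals inside a behavior's config, roughly like this (values are placeholders):

          reward_signals:
            extrinsic:
              gamma: 0.99
              strength: 1.0
            curiosity:                  # intrinsic curiosity module
              strength: 0.02
              gamma: 0.99
            gail:                       # imitation from a recorded demo
              strength: 0.01
              demo_path: Demos/Expert.demo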

  • @jn9747
    @jn9747 3 years ago

    Hi,
    Thank you for a great video.
    Can I ask you where I can find the text you show on 8:20?
    All the best

  • @maarten9222
    @maarten9222 4 years ago +1

    When I try to run: mlagents-learn config/trainer_config.yaml --run-id=MyFirstAI it says 'mlagents-learn' is not recognized as an internal or external command,
    operable program or batch file.
    How would I fix this?

    • @UitzUitz
      @UitzUitz 4 years ago

      I've been struggling with this error for over 2 hours now. No solution from Mr. Google could help me. Have you figured out a solution for this?

    • @maarten9222
      @maarten9222 4 years ago

      @@UitzUitz nope I gave up

    • @UitzUitz
      @UitzUitz 4 years ago

      @@maarten9222 This helped me get it running forum.unity.com/threads/mlagents-learn-is-not-recognized-as-an-internal-or-external-command-operable-program-or-batch-fil.909716/
      Maybe it also helps you.

  • @SwiftDeveloperWorld
    @SwiftDeveloperWorld 4 years ago

    I'm new to Unity. I've been wondering about developing ML models with TensorFlow and Keras, but for commercial purposes I have to use Unity, and I'm a little confused about which one is better to work with. Which one is more commercial: Unity, or working directly with Python AI frameworks like TensorFlow and Keras?

  • @SmartKeyboard2011
    @SmartKeyboard2011 4 years ago

    Great video! Liked and subscribed,
    but in the macOS terminal, after installing Python 3.6.8,
    running mlagents-learn
    gives the error message: command not found.
    Please help

  • @pratyushbarikdev
    @pratyushbarikdev 3 years ago

    Awesome.

  • @alexanderf1598
    @alexanderf1598 3 years ago

    I tried a bunch of package versions (including preview versions), both through importing directly from the package manager and manually importing the corresponding .json file. However, I still cannot run the 3D ball demo due to the following compiler error: Assets\ML-Agents\Examples\Match3\Scripts\Match3Agent.cs(5,22): error CS0234: The type or namespace name 'Extensions' does not exist in the namespace 'Unity.MLAgents' (are you missing an assembly reference?).
    Any ideas what could be wrong/what else to try?

    • @commonbloodysense
      @commonbloodysense 3 years ago +2

      (using cloned repository) You now need to install the mlagents package and the extensions package separately. From Unity package manager, click + (top left of window), then select add package from disk, navigate to \ml-agents-release_12\com.unity.ml-agents.extensions and select the package.json

  • @justserv
    @justserv 4 years ago

    This was so cool! I want something a little different. I want to make an AI character who can look at a scene, then "close its eyes" and try to recreate the scene it saw from memory. Is ML-Agents capable of something like that? Or is it only capable of moving characters around?

    • @SebastianSchuchmannAI
      @SebastianSchuchmannAI  4 years ago

      Sounds fun! It might be possible to do with ML-Agents, it's hard to tell without having tried it. My gut feeling says this sounds more like an Autoencoder problem, it's a type of neural architecture that might be suited for this kind of task. I would say in general that it's less of a Reinforcement Learning problem, so I would look into PyTorch or Tensorflow to implement an Autoencoder. Good luck!

    • @justserv
      @justserv 4 years ago

      @@SebastianSchuchmannAI Awesome thank you! I'm new to the AI world and your tutorial was really easy to follow. Keep up the great work!

  • @hhaimetuber9722
    @hhaimetuber9722 3 years ago

    What? OK, now I know why I've been seeing Sebastians all around in Among Us and Paper.io

  • @TechnicalBurhan
    @TechnicalBurhan 3 years ago

    Do we also need PyTorch?

  • @adrianbricenoaguilar6701
    @adrianbricenoaguilar6701 4 years ago

    This is very good! Could you maybe do a video on how to set up the observations and rewards? I do have experience with NNs and ML. It would be super interesting if you made your own simple AI from scratch by setting up your own model.

  • @gamedevboy1181
    @gamedevboy1181 3 years ago +3

    9:19 why did the Unity team write gameObject.transform 🤨
    They should just write transform.

  • @jiteshmule9227
    @jiteshmule9227 4 years ago

    Amazing

  • @lindsay5985
    @lindsay5985 2 years ago

    When installing ML Agents in Unity's package manager, if ML Agents cannot be found, try selecting Unity Registry.

  • @YEIYEAH10
    @YEIYEAH10 4 years ago

    Hey Sebastian. Thank you for this video. I would like to ask if there is any way to extract the data that our agent is currently gathering, for data processing. I know that the Python API is doing that for us behind the scenes, but in case we wanted to code our own algorithms to work with Unity, that would be helpful.

  • @wiktor3453
    @wiktor3453 4 years ago +1

    If you get an error while installing mlagents in Python on Windows, it may be because you are using 32-bit Python and not 64-bit.

  • @j0kerplayz766
    @j0kerplayz766 4 years ago

    The last command 'mlagents-learn config/trainer_config.yaml --run-id=MyFirstAI' says that MLagents is typed wrong or could not be found, any ideas?

    • @DeJMan
      @DeJMan 4 years ago

      try this "mlagents-learn --run-id=MyFirstAI"
      if that doesn't work, do this first: "pip3 install --upgrade mlagents"

  • @CraftyClawBoom
    @CraftyClawBoom 3 years ago

    How could you train the AI while in game? My game idea requires the agent to learn from the player. Is this possible?

    • @dhruvagrawal3856
      @dhruvagrawal3856 3 years ago

      @CraftyClawBoom If you want the agent to learn from the player, don't do reinforcement learning; instead look into imitation learning. You can learn it from the channel called Code Monkey, but yes, you need to know the basics

  • @cyberfox9595
    @cyberfox9595 4 years ago

    I am getting the error message: module 'torch' has no attribute 'set_num_threads'

  • @fernir8702
    @fernir8702 3 years ago

    Hello, are the commands different for Windows, and do I have to type them in cmd? I'm really confused because the installation didn't work and the console says that no such command exists

  • @aigen-journey
    @aigen-journey 4 years ago

    What's the behaviour script/tree GUI at 6:17?
    I've never seen it inside of Unity

    • @DeJMan
      @DeJMan 4 years ago

      It's a third-party asset: Behavior Designer

  • @FESP321
    @FESP321 4 years ago

    good stuff

  • @onceappuonatime
    @onceappuonatime 4 years ago

    It looks like this is Reinforcement Learning. I am new to this, please correct me if I am wrong. If it is RL, then where is the State?

    • @SebastianSchuchmannAI
      @SebastianSchuchmannAI  4 years ago

      Yup it's RL. So where's the state? I would say there is the environment state which encapsulates everything about the environment and its internal, private state (The game objects in the scene, their logic and so on). Then there's also the agent state, which is a partial representation of the environment state, so everything the agent observes about the environment via its sensors like raycasts or cameras. Does that make sense?

  • @beingalitaheri
    @beingalitaheri 3 years ago

    Awesome, Thanks.
    FYI I am getting the error right at the end after running the last command, Please help:
    Traceback (most recent call last):
    File "/Library/Frameworks/Python.framework/Versions/3.7/bin/mlagents-learn", line 8, in
    sys.exit(main())
    File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/mlagents/trainers/learn.py", line 250, in main
    run_cli(parse_command_line())
    File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/mlagents/trainers/learn.py", line 49, in parse_command_line
    return RunOptions.from_argparse(args)
    File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/mlagents/trainers/settings.py", line 871, in from_argparse
    key
    mlagents.trainers.exception.TrainerConfigError: The option default was specified in your YAML file, but is invalid.

  • @kittytangsze
    @kittytangsze 4 years ago

    Do you have a tutorial for installation on Windows 10? I followed the doc instructions but still in vain. The error is "DLL load failed while importing _pywrap_tensorflow_internal", "Failed to load the native TensorFlow runtime". Any idea?

    • @alexhammer2802
      @alexhammer2802 4 years ago

      I had the same issue, but it was because I had a folder named ml-agents-release_2 inside the ml-agents folder, so I had to do the command "cd desktop/ml-agents-release_2/ml-agents-release_2" to get it to work

    • @kittytangsze
      @kittytangsze 4 years ago

      @@alexhammer2802 I finally know the reason. My CPU is so old that current TensorFlow does not support it. :(

  • @comdudeskip
    @comdudeskip 4 years ago +2

    @3:19 *Assigning Rewards

  • @billykotsos4642
    @billykotsos4642 4 years ago

    ML in Unity. Things are getting exciting!!!

  • @kharekelas4259
    @kharekelas4259 4 years ago

    Hi, I'm using Windows 10. I opened the Python I'd just installed and typed in the pip3 install mlagents line, and it complains about invalid syntax. What did I get wrong? I'm familiar with Unity and C# but have never touched Python. I can't install anything by following the video. Any ideas?

    • @thomasjoseph6893
      @thomasjoseph6893 4 years ago

      The documentation of ML agents is what you need to look at for troubleshooting. For me, things started working once I used a virtual environment, which they explain how to set up.
      Here is the documentation.
      github.com/Unity-Technologies/ml-agents/blob/master/docs/Getting-Started.md

  • @Uebermensch03
    @Uebermensch03 4 years ago

    Thanks for your video. I have some questions about how to install mlagents. You just installed mlagents via pip install mlagents, but from what I've seen on another website, it says to 1) install Anaconda, 2) set Anaconda up with Python 3.6, 3) go to the ml-agents folder in the Unity SDK and install via 'pip install -e .'. Is your approach okay?

    • @Uebermensch03
      @Uebermensch03 4 years ago

      Oh, I forgot a step between 2) and 3), which is the procedure of creating a new env using 'conda activate "~~"'

    • @Uebermensch03
      @Uebermensch03 4 years ago

      I just imitated your approach but my computer shows a lot of errors.
      C:\Users\ymc\Desktop\ml-agents-release-0.15.1>mlagents-learn config/trainer_config.yaml -run-id=whatever
      Traceback (most recent call last):
      File "c:\users\ymc\anaconda3\lib\site-packages\tensorflow\python\pywrap_tensorflow.py", line 58, in
      from tensorflow.python.pywrap_tensorflow_internal import *
      File "c:\users\ymc\anaconda3\lib\site-packages\tensorflow\python\pywrap_tensorflow_internal.py", line 28, in
      _pywrap_tensorflow_internal = swig_import_helper()
      File "c:\users\ymc\anaconda3\lib\site-packages\tensorflow\python\pywrap_tensorflow_internal.py", line 24, in swig_import_helper
      _mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description)
      File "c:\users\ymc\anaconda3\lib\imp.py", line 243, in load_module
      return load_dynamic(name, filename, file)
      File "c:\users\ymc\anaconda3\lib\imp.py", line 343, in load_dynamic
      return _load(spec)
      ImportError: DLL load failed:
      During handling of the above exception, another exception occurred:
      Traceback (most recent call last):
      File "c:\users\ymc\anaconda3\lib
      unpy.py", line 193, in _run_module_as_main
      "__main__", mod_spec)
      File "c:\users\ymc\anaconda3\lib
      unpy.py", line 85, in _run_code
      exec(code, run_globals)
      File "C:\Users\ymc\anaconda3\Scripts\mlagents-learn.exe\__main__.py", line 4, in
      File "c:\users\ymc\anaconda3\lib\site-packages\mlagents\trainers\learn.py", line 12, in
      from mlagents import tf_utils
      File "c:\users\ymc\anaconda3\lib\site-packages\mlagents\tf_utils\__init__.py", line 1, in
      from mlagents.tf_utils.tf import tf as tf # noqa
      File "c:\users\ymc\anaconda3\lib\site-packages\mlagents\tf_utils\tf.py", line 3, in
      import tensorflow as tf # noqa I201
      File "c:\users\ymc\anaconda3\lib\site-packages\tensorflow\__init__.py", line 41, in
      from tensorflow.python.tools import module_util as _module_util
      File "c:\users\ymc\anaconda3\lib\site-packages\tensorflow\python\__init__.py", line 50, in
      from tensorflow.python import pywrap_tensorflow
      File "c:\users\ymc\anaconda3\lib\site-packages\tensorflow\python\pywrap_tensorflow.py", line 69, in
      raise ImportError(msg)
      ImportError: Traceback (most recent call last):
      File "c:\users\ymc\anaconda3\lib\site-packages\tensorflow\python\pywrap_tensorflow.py", line 58, in
      from tensorflow.python.pywrap_tensorflow_internal import *
      File "c:\users\ymc\anaconda3\lib\site-packages\tensorflow\python\pywrap_tensorflow_internal.py", line 28, in
      _pywrap_tensorflow_internal = swig_import_helper()
      File "c:\users\ymc\anaconda3\lib\site-packages\tensorflow\python\pywrap_tensorflow_internal.py", line 24, in swig_import_helper
      _mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description)
      File "c:\users\ymc\anaconda3\lib\imp.py", line 243, in load_module
      return load_dynamic(name, filename, file)
      File "c:\users\ymc\anaconda3\lib\imp.py", line 343, in load_dynamic
      return _load(spec)
      ImportError: DLL load failed:
      Failed to load the native TensorFlow runtime.
      See www.tensorflow.org/install/errors
      for some common reasons and solutions. Include the entire stack trace
      above this error message when asking for help.
      why does it give errors like that? Do you happen to have an idea?

    • @Makaniiii
      @Makaniiii 4 years ago

      @@Uebermensch03 I had the same error.
      Don't know about your other problems, but I resolved this by installing another version of TensorFlow via pip.
      In the end I used TensorFlow 2.0.0

  • @Mehrdad995GTa
    @Mehrdad995GTa 3 years ago

    10:23 That *simple* pip3 install mlagents took me over 2 hours,
    because I didn't know that mlagents relies on something named "PEP 517" and "h5py",
    which in turn rely on Python 3.7, so since I had 3.8.2 it couldn't be installed,
    and I had to use the preinstalled 3.7 in a separate environment setup.
    So, yeah, it took over 2 hours and I'm not sure whether this is going to work at all or not :/

  • @uotsabchakma
    @uotsabchakma 2 years ago +1

    Do I need python???

    • @theashbot4097
      @theashbot4097 1 year ago +2

      Yes. To make the Agent learn you will need to use Python.

  • @timbon
    @timbon 4 years ago

    When I try installing it says, "Config file could not be found in \ml-agents-master\config\trainer_config.yaml" and when I check the folder, there is no file with that name. Did something change in the past month?

    • @DeJMan
      @DeJMan 4 years ago +1

      try this "mlagents-learn --run-id=MyFirstAI"
      if that doesn't work, do this first: "pip3 install --upgrade mlagents"

    • @timbon
      @timbon 4 years ago

      @@DeJMan My dude, you're a lifesaver. It worked, thank you so much. Why has it changed?

    • @DeJMan
      @DeJMan 4 years ago

      @@timbon Unity has updated ML-Agents so that the single trainer_config file that used to hold multiple behaviors has now been split into separate files (so it's a different path now). This change has also removed the need to specify a trainer file (it uses a default one).
      Welcome to ML-Agents Release 3

    • @timbon
      @timbon 4 years ago

      @@DeJMan Well, thanks. I'm going to go mention that in the Unity Forum thread I created