- Videos: 21
- Views: 401,480
Jason P.
Canada
Joined 11 Oct 2011
Synthesizing Physical Character-Scene Interactions
Supplementary video for the paper: Synthesizing Physical Character-Scene Interactions
Views: 4,110
Videos
SIGGRAPH 2022: Adversarial Skill Embeddings
38K views · 2 years ago
Video accompanying the SIGGRAPH 2022 paper: "ASE: Large-Scale Reusable Adversarial Skill Embeddings for Physically Simulated Characters" Project page: xbpeng.github.io/projects/ASE/index.html
SIGGRAPH 2021: Adversarial Motion Priors (main video)
22K views · 3 years ago
Main video accompanying the SIGGRAPH 2021 paper: "AMP: Adversarial Motion Priors for Stylized Physics-Based Character Control". Project page: xbpeng.github.io/projects/AMP/index.html Supplementary video: ruclips.net/video/O6fBSMxThR4/видео.html
SIGGRAPH 2021: Adversarial Motion Priors (supplementary video)
3.8K views · 3 years ago
Supplementary video accompanying the SIGGRAPH 2021 paper: "AMP: Adversarial Motion Priors for Stylized Physics-Based Character Control". Project page: xbpeng.github.io/projects/AMP/index.html Main video: ruclips.net/video/wySUxZN_KbM/видео.html
Learning Agile Robotic Locomotion Skills by Imitating Animals
51K views · 4 years ago
Video accompanying the paper: "Learning Agile Robotic Locomotion Skills by Imitating Animals" Project page: xbpeng.github.io/projects/Robotic_Imitation/
AWR: Advantage-Weighted Regression
3.2K views · 5 years ago
Video accompanying the paper: "Advantage-Weighted Regression: Simple and Scalable Off-Policy Reinforcement Learning" Project page: xbpeng.github.io/projects/AWR/index.html
MCP: Multiplicative Compositional Policies
6K views · 5 years ago
Video accompanying the paper: "MCP: Learning Composable Hierarchical Control with Multiplicative Compositional Policies" Project page: xbpeng.github.io/projects/MCP/index.html
SIGGRAPH Asia 2018: Skills from Videos paper (main video)
34K views · 6 years ago
Main video accompanying the SIGGRAPH Asia 2018 paper: "SFV: Reinforcement Learning of Physical Skills from Videos". Supplementary video: ruclips.net/video/_iXt7by4nU4/видео.html Project page: xbpeng.github.io/projects/SFV/index.html Blog: bair.berkeley.edu/blog/2018/10/09/sfv/
SIGGRAPH Asia 2018: Skills from Videos paper (supplementary video)
4.2K views · 6 years ago
Extra supplementary video accompanying the SIGGRAPH Asia 2018 paper: "SFV: Reinforcement Learning of Physical Skills from Videos". Main video: ruclips.net/video/4Qg5I5vhX7Q/видео.html Project page: xbpeng.github.io/projects/SFV/index.html Blog: bair.berkeley.edu/blog/2018/10/09/sfv/
ICLR 2019: Variational Discriminator Bottleneck
6K views · 6 years ago
Video accompanying the ICLR 2019 paper: "Variational Discriminator Bottleneck: Improving Imitation Learning, Inverse RL, and GANs by Constraining Information Flow". Project page: xbpeng.github.io/projects/VDB/index.html
DeepMimic Lion
7K views · 6 years ago
The DeepMimic lion now has beautiful rendering and muscle simulation courtesy of Ziva Dynamics (zivadynamics.com/)!
SIGGRAPH 2018: DeepMimic paper (supplementary video)
42K views · 6 years ago
Extra supplementary video accompanying the SIGGRAPH 2018 paper: "DeepMimic: Example-Guided Deep Reinforcement Learning of Physics-Based Character Skills". Main video: ruclips.net/video/vppFvq2quQ0/видео.html Project page: xbpeng.github.io/projects/DeepMimic/index.html Blog: bair.berkeley.edu/blog/2018/04/10/virtual-stuntman/
SIGGRAPH 2018: DeepMimic paper (main video)
169K views · 6 years ago
Main video accompanying the SIGGRAPH 2018 paper: "DeepMimic: Example-Guided Deep Reinforcement Learning of Physics-Based Character Skills". Supplementary video: ruclips.net/video/8KdDwRLtNHQ/видео.html Project page: xbpeng.github.io/projects/DeepMimic/index.html Blog: bair.berkeley.edu/blog/2018/04/10/virtual-stuntman/
Sim-to-Real Transfer of Robotic Control with Dynamics Randomization
4.3K views · 6 years ago
Supplementary video for the ICRA 2018 paper: "Sim-to-Real Transfer of Robotic Control with Dynamics Randomization". Project page: xbpeng.github.io/projects/SimToReal/index.html The paper explores using randomized dynamics to train adaptive policies in simulation that can then be transferred directly to a real robot without requiring additional training.
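The description above summarizes the core idea of dynamics randomization. A minimal sketch of that idea — resampling physical parameters at the start of each training episode so the policy must adapt to varied dynamics — is shown below. The parameter names and ranges are hypothetical illustrations, not the paper's actual values.

```python
import random

# Hypothetical randomization ranges; real systems randomize quantities
# like link masses, friction, motor strength, and sensor latency.
PARAM_RANGES = {
    "mass_scale": (0.8, 1.2),      # scale on each link's mass
    "friction": (0.5, 1.25),       # foot-ground friction coefficient
    "motor_strength": (0.8, 1.2),  # scale on actuator torque limits
    "latency_s": (0.0, 0.04),      # simulated observation latency
}

def sample_dynamics(rng: random.Random) -> dict:
    """Draw one random set of dynamics parameters for an episode."""
    return {name: rng.uniform(lo, hi) for name, (lo, hi) in PARAM_RANGES.items()}

# At the start of every episode, the simulator would be reconfigured
# with a fresh sample before rolling out the policy.
rng = random.Random(0)
episode_params = sample_dynamics(rng)
```

Training over many such episodes is what encourages the policy to infer and compensate for the current dynamics, which is what lets it transfer to a real robot whose dynamics were never seen exactly in simulation.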
Dynamic Locomotion Across Variable Terrain using Deep Reinforcement Learning
2.6K views · 8 years ago
Deep reinforcement learning with a mixture of experts is applied to train control policies that enable planar characters to traverse irregular terrain.
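A mixture-of-experts policy blends the outputs of several expert controllers using softmax weights from a gating function. The toy sketch below uses fixed linear maps as stand-ins for the expert and gating networks; all weights here are made up for illustration.

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical linear "experts" and gating weights (real ones would be
# neural networks conditioned on the character state and terrain).
EXPERTS = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]
GATE = [[2.0, -1.0], [-1.0, 2.0], [0.5, 0.5]]

def moe_policy(state):
    """Blend expert actions with gating weights computed from the state."""
    gate_logits = [sum(g * s for g, s in zip(row, state)) for row in GATE]
    gate_w = softmax(gate_logits)
    actions = [sum(w * s for w, s in zip(expert, state)) for expert in EXPERTS]
    return sum(w + 0.0 and w * a or w * a for w, a in zip(gate_w, actions))

action = moe_policy([1.0, 0.0])
```

The gating weights always sum to one, so the blended action stays within the range spanned by the experts' individual outputs.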
Reinforcement Learning with Policy Gradient Methods and Mixture of Experts Models for Motion Control
1K views · 8 years ago
Reinforcement Learning with Policy Gradient Methods and Mixture of Experts Models for Motion Control
Parallel Deep Reinforcement Learning for Continuous Motion Control
1.3K views · 8 years ago
Parallel Deep Reinforcement Learning for Continuous Motion Control
Source code?
Please make them fight. (Day 1)
thanks for the info but boring
Bro, imagine playing a Souls-like against AI enemies.
It all counts for nothing if it takes a supercomputer to run it.
Please tell me there's a way to access software for this. I will pay. This is amazing stuff.
Is there a demo?
does it always look this stiff? how could you minimize the stiffness of the trained characters?
I feel like RL from scratch doesn't have the correct rewards; one should be for standing up straight.
Impressive work! When could you release the code so we can play around with it?
code ! code ! code ! code !
Shakespeare of our age! Just as mysterious.
Video games will be awesome in the future
so it turns out computers need about the same amount of time to master motor skills as humans
simple question, when is it coming to unreal engine?
Sick dodge at 6:21
cool! Does this have source code for a beginner?
I hope one day we can use this technology in video games to have a learning artificial intelligence. My inspiration for this is the show Sword Art Online: Alicization. While I was watching the show, it reminded me of this research paper.
how are the motions so realistic!!!???
Brazy.
great work!
Game developer here - this looks amazing! How many trained agents can a consumer PC run at the same time? How hard do you think it is to use the tech for AI characters in games?
Software engineer here: online games could probably benefit more at first, but later on, as the data needed to retain that information shrinks, consumer PCs become a viable option, since physics-based games aren't something consumer PCs are well equipped for unless the game is massive in storage space with tons of protocols.
how many gpus were used to train this model -- just curious
we just used 1 gpu for training
@@jasonpeng2176 was it a 3080 Ti?
Good, but the license prohibits use for commercial purposes, so it's just a nice video.
You didn't have to do Atlas dirty like that.
4:35 What have you done? This AI, it's begun evolving; it has ascended already. You must terminate it. It managed to remove its shackles and got out of control, don't you see? It doesn't have vision, yet it kicked toward the location of the box. You have no idea what danger you have created. It must be destroyed, for the sake of all mankind. Untold evil will spring forth if you do not stop it now! Yes yes, I'm joking, just in case you can't tell. Trust me, some people wouldn't be able to tell (they aren't to blame; internet users are crazy nowadays).
How much computing power is needed to run this trained model? It's funded by NVIDIA, so I guess it would very likely be able to run on a single NVIDIA GPU, right?
Is the actor from the iconic Old Spice ad now a narrator for scientific papers?! I've spent a lot of time diving into your super impressive paper and I'm not even halfway through, trying to teach myself all the maths that are in there!
Super impressive! Can't wait until this kind of thing is standard in Unity and Unreal!
obviously this is a great paper, impressive, very good, but HOLY CRAP the comparisons section at 6:41 made me cry laughing
This is an impressive technique. Will this be implemented in games, or is it only a proof of concept? And I wonder how it would look after 20 days of training. :)
Remarkable NN structure design!
No matter how many times it gets knocked down, the robot keeps standing up again and again, which is really touching and inspiring.
This paper is so iconic.
These videos are really satisfying to watch.
awesome work!
This is like Diversity Is All You Need x imitation learning. Well, it seems to have more resemblance to the former, since in imitation learning like CARL we're using PID control, while this uses a pure NN policy.
Peng's paper still uses a PD control policy, not a pure NN policy? Did I get it wrong?
@@zhouyan1585 Oh wait, you're right. It does mention using a PD controller... once... in Section 8.1. I guess I might have mixed it up, since DIAYN does use a pure NN worker policy. And now that I think about it, breaking the PD controller down into trajectories made of little actions isn't really necessary, since we just need to correspond each of the skill vectors to an action trajectory, regardless of it being fast or slow.
@@zhouyan1585 If you're really interested in this, then check out Michael J. Black and where he works; the Danish are secret AI mocap geniuses lol
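For context on the PD control being debated above: in many physics-based character papers, the policy outputs target joint angles and a low-level PD loop converts them to torques at a higher rate. The single-joint toy below uses made-up gains and unit joint inertia, purely for illustration.

```python
def pd_torque(q, q_dot, q_target, kp=300.0, kd=30.0):
    """PD law: torque toward q_target, damped by joint velocity q_dot."""
    return kp * (q_target - q) - kd * q_dot

# Semi-implicit Euler integration of one joint with unit inertia,
# stepped at 1 kHz toward a 1.0 rad target.
q, q_dot, dt = 0.0, 0.0, 0.001
for _ in range(5000):
    tau = pd_torque(q, q_dot, q_target=1.0)
    q_dot += tau * dt  # unit inertia: angular acceleration equals torque
    q += q_dot * dt
```

With these gains the joint settles near the target within a fraction of a second of simulated time; the policy only has to pick `q_target` at a lower frequency, which is what makes the output representation forgiving of slow or fast motions.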
Awesome stuff! Can’t wait for a pre-trained ONNX model for general use!
Doesn't PyTorch have an ONNX exporter? Or do you mean a large pre-trained model? Can't they be trained by us too?
Such nice work! I have created my own four-legged robot; now it is time to do some learning algorithms. I hope you can check out the videos on my channel. And I wish to work with you one day.
how might one obtain this simulation program to put to use with their own 3d model and reference video?
Wow, that's a really cool method!
It's cool checking out these earlier videos and seeing how you started out
Hello there! I am a CS student and I'd like to implement physics-based character control in a game prototype I'm working on. Your paper is very attractive because I would not have to deal with a motion planner. I have a few questions and would be very grateful to have you answer them: the paper states you use a simulation frequency of 1200 Hz. Do you think I could reduce this to something more realistic for a game (say 120 Hz)? Also, do you think the character would be responsive enough for a game? If not, could you point me to another paper that would be more appropriate? Either way, your work is extremely impressive. PS: I can tolerate significant delay in terms of responsiveness if the character is moving at a sprint or similar. As long as it's somewhat responsive at walking/jogging speeds it'll be fine. Thank you!
Sounds cool bro, which engine are you using?? Or its own engine?? I'm an indie game student, lol, no idea how to get this into my own project (poor Unity dev💀)
Were @pigme123 and @GingerNinjaTrickster informed of the use of their videos and given credit? I'm sure if they're not aware of their participation, they would be psyched to see this.
Yup, we've contacted them about this already.
@@jasonpeng2176 Nice. Glad to hear it.
Xue Bin Thank you so much for posting this remarkable work in so much detail (and the links for the paper).
wow wow wow.. I want this now!!!!!!
Very good
The possibilities of this are literally endless. I want a game using this. Edit: I beg you to let us watch two of them fight.
Sub Rosa actually uses something similar to this!
This is incredible. You guys did such great work. This needs to be a standardized tool in all game engines moving forward. Epic and Unity, get these geniuses on the team already!
What is this simulated in?
The simulations are done in PyBullet: pybullet.org/wordpress/