That's great. It's just what was missing in the integration you've set up. I'm already waiting for your next videos, I've seen them all. Thanks a lot
I had done this earlier with a Python backend for blendshape predictions, but this is the best way to do it. Thank you for the detailed explanation and the code.
Hey, I'm looking for exactly how to do blendshape predictions in Python. Could you share how?
Your videos are really helpful. Hope you continue making such useful videos😊
Hi Sarge, great stuff.
Any leads/links on how to do this for the full body? A cheatsheet for mapping the poses to the avatar parts would also help.
Thanks
Hey, great vid, although I'd really like a video on how to implement this in Python with blendshape predictions. Could you make a video on that?
This is really cool. Thank You.
An example with holistic tracking, please?
Hi @Sarge, really cool project. Can you tell me, if I want to do full-body tracking and then make the avatar do the same, is it possible? What can I do to implement this using your current approach?
Is it possible to make the avatar mimic hand signs? Or is it only for the face?
Awesome video! Thanks so much for posting. How could you mirror the webcam stream so it's a mirror image of what the user is doing?
To anyone else looking for a solution: I did it by setting the canvas styling to transform: "scaleX(-1);"
@gabrieljreed Yep, CSS should do the trick.
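For anyone who prefers doing the flip from script instead of a stylesheet, here is a minimal sketch of the same idea (the function name is made up; it just applies the CSS transform from the comment above to any element you pass in, e.g. both the video preview and the landmark canvas):

```javascript
// Mirror an element horizontally so the user sees a mirror image of
// themselves. Works on anything with a `style` property, e.g. the
// <video> preview and the overlay <canvas>.
function mirrorHorizontally(el) {
  el.style.transform = "scaleX(-1)";
}
```

Note this only flips the rendering; the landmark coordinates MediaPipe returns are still unmirrored, so if you draw into the same flipped canvas everything stays consistent.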
Hi Sarge, thank you for this amazing tutorial. Given that Mediapipe includes pose landmark detection capabilities, could we use a similar approach for hand transformations? Any thoughts on this?
Sir, have you done any work on this?
Outstanding! Hey, I am really struggling with Unity WebGL trying to establish good communication between the browser and Ready Player Me blendshapes... I can't send multiple parameters with unityInstance.SendMessage("myGameObject", "myMethod", shape.categoryName, shape.score). I've tried with JsonUtility, but I am not an expert. Any idea? Thanks!
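Since Unity's WebGL SendMessage only carries a single string (or number) argument, one common workaround is to pack both values into one JSON string on the JavaScript side and unpack it in a single C# method. A sketch, where `unityInstance`, "AvatarController" and "OnBlendshape" are placeholder names for your own setup:

```javascript
// Pack a blendshape's name and score into one JSON string, because
// Unity WebGL's SendMessage accepts only a single string parameter.
function sendBlendshape(unityInstance, shape) {
  const payload = JSON.stringify({
    name: shape.categoryName,
    score: shape.score,
  });
  // Target GameObject and method names are placeholders.
  unityInstance.SendMessage("AvatarController", "OnBlendshape", payload);
}
```

On the Unity side, a small [Serializable] class plus JsonUtility.FromJson can unpack the two fields again inside OnBlendshape(string json).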
Thank you very much! Can we use a similar approach for the whole upper body?
thanks a lot
Hi, thanks, I found it. ❤
Can this be implemented in UE4 C++ and then drive the model in UE4, like AR?
I want to replace my face with a cat face, or with a model I draw in e.g. SolidWorks. Can you give me some advice?
As long as the model has the same bone structure and orientations it should work. You can also get a model from Ready Player Me and modify it.
Soooooooooo....Facerig but open source! Cool.
Thanks Sarge, is it possible to integrate this with Unity?
Hi Sarge, thank you for the tutorial. I was wondering if this can be achieved in Unity?
I have a video on how this can be done with an iPhone; however, MediaPipe is not yet available for Unity.
How can I do this using the MediaPipe points?
Can we use ARKit blendshapes only for live facial tracking, or can they be used for something else too?
Anything face animation related.
@sgt3v Thanks for the reply. Actually I am trying to change the facial expressions of my RPM character, but no matter what I do, I am not able to change its expressions.
Would you give me any suggestions in this regard? I would be really grateful.
Whenever I select ARKit blendshapes, nothing happens. Do I have to write a script for it?
You need to request the avatar with the ARKit param; you can read about all the options in the documentation: docs.readyplayer.me/ready-player-me
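If I read the RPM docs correctly, the relevant option is the `morphTargets` query parameter on the avatar GLB URL. A sketch of building such a request URL (the avatar ID and function name below are placeholders):

```javascript
// Build a Ready Player Me GLB URL that asks for ARKit blendshapes.
// The `morphTargets=ARKit` query parameter is described in the RPM docs;
// `avatarId` is whatever ID your avatar URL contains.
function avatarUrlWithARKit(avatarId) {
  return `https://models.readyplayer.me/${avatarId}.glb?morphTargets=ARKit`;
}
```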
💔💔 I do not understand, I need help 😢
Thank you, your video is great. One problem: it works well with your avatar, but not with mine. I created an avatar with Ready Player Me today, one year after your video. Is it because the avatar standard changed? I printed 'mesh.morphTargetDictionary[element.categoryName]' to the console and half of the entries are 'undefined'.
Let me try again. Maybe I created a full-body avatar; this might be the reason.
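That would fit: full-body RPM avatars typically split their morph targets across several meshes (head, teeth, etc.), so any single mesh's morphTargetDictionary is missing some blendshape names, and those lookups come back undefined. A defensive sketch (the function name and shape objects are illustrative, not from the video's code):

```javascript
// Apply MediaPipe blendshape scores to a three.js-style mesh, skipping
// any category this mesh's morph target dictionary does not contain.
// Returns the names that were missing, so you can apply them on the
// avatar's other meshes instead of writing to an undefined index.
function applyBlendshapes(mesh, blendshapes) {
  const missing = [];
  for (const shape of blendshapes) {
    const index = mesh.morphTargetDictionary[shape.categoryName];
    if (index === undefined) {
      missing.push(shape.categoryName); // this mesh doesn't own that target
      continue;
    }
    mesh.morphTargetInfluences[index] = shape.score;
  }
  return missing;
}
```

In practice you would traverse the loaded scene and run this on every mesh that has a morphTargetDictionary, rather than on a single hard-coded mesh.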
ARKit is accessible to iOS users; are there any alternatives available for Windows?
This is about ARKit blendshapes, not ARKit. ARKit blendshapes come with RPM avatars.
@sgt3v But the ARKit blendshapes are not working when selected in the avatar config in Unity.
Is it necessary to create an app? Why can't you use the mesh tracking to make a video? 😢 Well, I guess at least we have ChatGPT. Either way, good info.
I do not think I understand your question. MediaPipe is available in Python, on the web, and on Android, so I made a web app whose results are accessible to most viewers. You can check the MediaPipe documentation for more.
Can we do the same thing with Python?
Yep, MediaPipe has a Python lib as well.
@sgt3v Yes, but can I do the character animation?
If you can use a 3D renderer in Python, then yes, it should work. In the web case it's based on react-three-fiber.
@sgt3v 🙂
@__________________________6910 This project might help you: github.com/mmatl/pyrender
Can this be used to combine any chatbot with that avatar?
Yes.
@sgt3v Do you have any tutorial on combining, for example, a Wit bot or ChatGPT with lip movement?