Hello, I'm using MediaPipe + the default VRM. You can also add additional things to the face tracking graph, since it also outputs values for mouthSmileLeft/Right (for example).
Ah, I forgot that it's like VNyan rather than VSeeFace when it comes to coded blendshape mapping. Please disregard my message saying the default VRM can't do it, because it can.
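As a minimal sketch of the idea above (not the commenter's actual graph): MediaPipe's face landmarker reports ARKit-style blendshape scores in the 0..1 range, so a smile trigger can average the left/right values and compare against a threshold. The 0.5 threshold here is an arbitrary assumption for illustration.

```python
def smile_score(blendshapes: dict) -> float:
    """Average of the two ARKit-style smile blendshapes (0.0 when absent)."""
    left = blendshapes.get("mouthSmileLeft", 0.0)
    right = blendshapes.get("mouthSmileRight", 0.0)
    return (left + right) / 2.0

def is_smiling(blendshapes: dict, threshold: float = 0.5) -> bool:
    """True when the averaged smile score rises above the threshold."""
    return smile_score(blendshapes) > threshold

if __name__ == "__main__":
    # Sample scores as a face tracker might report them mid-smile
    tracked = {"mouthSmileLeft": 0.7, "mouthSmileRight": 0.6}
    print(is_smiling(tracked))  # True for these sample values
```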
@@Kana_Fuyuko Yes, though that doesn't change the fact that it is a bit limited/restricted.
Thank you for the tutorial, it was really helpful!
Hi, I'm having trouble triggering the detection, and I am using a VRM model that has the expressions built in (or at least I think it does). I made the model in VRoid Studio and was using it in VSeeFace flawlessly.
In the condition you need to set "Use BlendShape Value From Face Tracking" to "yes", and then select your tracker. It should work!
Hi, hello! Do you know if you can make the mouth parameters react to desktop audio?
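A hypothetical sketch of how such a tracker condition could be evaluated (the program's internal names are not public, so these are illustrative). Note the direction of the comparison matters: with "<", a resting score near 0 already satisfies the condition, which keeps the expression permanently on, while ">" only fires when the tracked value rises above the threshold.

```python
import operator

# Illustrative condition evaluator, assuming tracked blendshape
# values in the 0..1 range and a user-chosen comparison operator.
OPS = {"<": operator.lt, ">": operator.gt}

def condition_met(tracked_value: float, op: str, threshold: float) -> bool:
    """Apply the chosen comparison between the tracked value and threshold."""
    return OPS[op](tracked_value, threshold)

resting = 0.05   # typical smile score with a neutral face
smiling = 0.80   # score while actually smiling

print(condition_met(resting, "<", 0.5))  # True: "<" fires even at rest
print(condition_met(smiling, ">", 0.5))  # True: ">" fires only while smiling
```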
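A generic sketch of the idea behind the question, not a confirmed feature of any specific program: map the loudness (RMS) of an audio buffer to a 0..1 mouth-open value. Actually capturing desktop audio would need something like a loopback device; the samples below are synthetic, and the gain value is an arbitrary assumption.

```python
import math

def rms(samples) -> float:
    """Root-mean-square amplitude of an audio buffer."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def mouth_open(samples, gain: float = 4.0) -> float:
    """Scale RMS into the 0..1 range a mouth parameter expects."""
    return min(1.0, rms(samples) * gain)

silence = [0.0] * 256
speech = [0.25 * math.sin(i * 0.3) for i in range(256)]  # fake voiced audio

print(mouth_open(silence))  # 0.0: no sound, mouth stays closed
print(mouth_open(speech))   # louder input gives a larger mouth value
```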
ruclips.net/video/blIU1OLBLN4/видео.html
Also, 3D calls them blendshapes (parameters are usually used for Unity animation files for transitions), while Live2D uses the term parameters (and blendshapes, depending on the setup).
Is this better than VSeeFace?
@@uo2265 I don't do program comparisons, as I don't want to encourage biased opinions or insult a developer's work. Just use what you prefer if it works for you.
@@Kana_Fuyuko I just set it up and it's not working, please help T-T. When I set the condition to < it starts to always smile (for the Fun expression).
@uo2265 It is recommended that the model has ARKit blendshapes, so that it is easier to debug and test the expression detection.
@@Kana_Fuyuko I tried finding your tutorial for this but haven't found it -;-
@@uo2265 If you have a VRoid model, you can either purchase the Hana Tool plugin or, if you prefer free, try NEB Tools for VRoid. If it's not a VRoid model and you made it in Blender, then you can rig it from scratch here: ruclips.net/video/KvkRkXcpKn4/видео.htmlsi=rJ-KS4_46MXT-Wso