I like how you say something complicated will be easy, and then actually make it easy.
Note: with a newer update, you need to turn Legacy Transform on in the final step when setting the rotation on the Over TOP.
We’re glad to hear it, thanks for the feedback :) And great catch! Another option is to omit the “math4” math CHOP from your network.
For anyone who’s interested, in version 2022.24200 the following change was made: "TOPs with transform parameters to be consistent in direction with the Transform TOP. Added a toggle parameter for 'legacy' behavior."
This means that in version 2022.24200 and later, if you are using the Math CHOP to re-range the rotation value you send to the Over TOP, by default the texture will rotate in the opposite direction from what you see in the video. If Legacy Transform is turned on, it will behave as it does in the video.
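(If you'd rather keep the newer default behavior instead of toggling Legacy Transform, one option is to flip the sign of the rotation before it reaches the Over TOP. Here's a minimal sketch; the operator name 'over1' and the 'rotation' channel name are guesses, so swap in whatever your network actually uses.)

```python
# Minimal sketch for builds 2022.24200 and newer: negate the rotation so the
# overlay spins in the same direction as in the video.
# 'over1' and the 'rotation' channel name are assumptions.
p = op('over1').par.rotate
p.expr = "-op('math4')['rotation'].eval()"
p.mode = ParMode.EXPRESSION
```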
Great video, thank you Elburz! Can you make a tutorial about using OpenCV to track hands and the body?
Hello~ Elburz, I have a question: how many faces can it detect at the same time?
If another face appears, would the facial point annotation be affected and the tracking fail?
Curious: how do we develop this alongside a prebuilt dataset? For example, an emotion detection library built on the 68 landmark points.
That's a good question. Do you have a link for the emotion detector library that I can look at?
@@TheInteractiveImmersiveHQ www.astesj.com/publications/ASTESJ_050504.pdf is how to train a model. But I guess if we use something like Wekinator, we can process the CHOP data externally, run the algorithm there, and then send a trigger back to TD over OSC or any other protocol: www.wekinator.org. I never managed to make it work due to bad OSC messages; TD was not able to read them, and I didn't get why. Another approach would be to use TD Neuron (another thing I never got completely), and do the whole training in TD. Mmm, sounds like really hard work!!!!
Thanks!! This is exactly what I was looking for! 😍🥰
You're welcome :) Glad it was helpful!
Hello! Thank you for the video.
I am preparing for an exhibition with a university club, so I'd like to ask for your advice. The theme is horror. We want to make a mirror using TouchDesigner and a Kinect, and create a scene where the reflected person's face and the mirror appear slightly broken. This video seems very helpful for making the mirror itself, but I don't know how to approach the mirror-breaking effect. I need your advice, please!
Can you use this same method but have a 3D object follow your face?
Yes! You could use the same CHOP channels to control the positioning of a Geometry COMP. The only snag you might run into is the alignment with the webcam feed, since the rendered view of the 3D geometry will depend on the position of the Geometry COMP as well as the position of the Camera COMP.
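(For a concrete idea of the wiring, here's a minimal sketch. It assumes the tracking data lands in a Null CHOP called 'null_face' with channels 'face_x' and 'face_y', and that the Geometry COMP is 'geo1'; all of those names are placeholders for whatever your network actually has.)

```python
# Sketch only: drive a Geometry COMP's position from face-tracking CHOP data.
# 'null_face', 'face_x'/'face_y', and 'geo1' are placeholder names.
geo = op('geo1')

for par_name, chan in (('tx', 'face_x'), ('ty', 'face_y')):
    p = getattr(geo.par, par_name)
    p.expr = "op('null_face')['{}']".format(chan)
    p.mode = ParMode.EXPRESSION

# Note: where the geometry lands on screen also depends on the Camera COMP,
# so expect to add a scale/offset before it lines up with the webcam feed.
```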
Thanks a lot! Looks great! But what should I do if I need to match, for example, the upper-left corner of my picture to a face landmark instead of the center?
Can you tell me more about what you're trying to do?
@@TheInteractiveImmersiveHQ OK, I'm trying to attach the Thug Life joint to the corner of the mouth, but it places the center of the joint on the existing point on the face. If I offset it with a Math CHOP (Mult-Add), it's good on the X axis, but it deviates when I move my head up and down.
@@madabron I'd recommend making sure all your transformation operators are set not to use center alignment, but instead an alignment that best matches the object you're lining up; in this case that might be the top-left or top-right corner. Then you should be able to get a point near the corner of the mouth; it looks like landmarks 61 and 65 are the inner corners of the two sides of the mouth. Do you have that set up so far?
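(If it helps, here's a rough sketch of one way to pin the corner of an overlay to a landmark rather than its center. Every name here, 'facetrack1', 'over1', 'thuglife', 'webcam', and the 'p61x'-style channels, is a placeholder, and it assumes normalized 0-1 landmark coordinates plus an Over TOP whose Translate is in fraction units.)

```python
# Rough sketch: place the top-left corner of the overlay on landmark 61.
# Assumes landmark coords are 0-1 with (0,0) at the bottom-left, and that the
# Over TOP's Translate is in fractions with (0,0) meaning "centered".
face = op('facetrack1')
over = op('over1')

lx = face['p61x'].eval()   # landmark x (placeholder channel name)
ly = face['p61y'].eval()   # landmark y

# How big the overlay is relative to the background, as a fraction.
w_frac = op('thuglife').width / op('webcam').width
h_frac = op('thuglife').height / op('webcam').height

# Offset by half the overlay size so its corner, not its center, sits on the point.
over.par.tx = (lx - 0.5) + w_frac / 2
over.par.ty = (ly - 0.5) - h_frac / 2
```

In practice you'd run something like this from a CHOP Execute DAT, or fold the same math into parameter expressions so it updates every frame.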
@@TheInteractiveImmersiveHQ I figured out how to arrange the joint the way I need using a Crop TOP. It works fine! Thanks!
Thanks a lot mate for this tutorial! Glad to learn this great faster/easier way to do face tracking instead of using my custom and much slower version using OpenCV :)
No problem! Glad you enjoyed it. It definitely is making things a lot easier on the development side with these RTX features.
Hi! I couldn't find the tutorial you mentioned about the Face Track SOP, and I really need it. Can you show me the way? Thank you!
Hi, this tutorial covers how to use the Face Track CHOP: *RTX Face Tracking in TouchDesigner* - ruclips.net/video/cOOP2PtRv-U/видео.html . Hope that helps!
Hey, just wondering how I could make this an Instagram filter.
Unfortunately there's no pipeline for taking TouchDesigner projects to Instagram. That said, you could likely create a similar effect with Meta Spark. We have a number of tutorials for that platform if you're interested: ruclips.net/p/PLpuCjVEMQha-xKMJjXgNoj-Uff8J_zDVe
This is exactly what I was looking for! 🔥🔥🔥🔥🔥 You rule!
Thanks! It was a lot of fun and surprisingly easy to set up!
Hi, I really hope you can help me with this problem ASAP: my Render TOP isn't displaying the geometry. I tried troubleshooting on Google and found that I can see my geometry in the render window when the render mode is Cube Map or Fish Eye, but not when it is 2D, and unfortunately I can't continue because the Camera COMP won't pick it up.
Do you have materials applied? Can you try adding a Light COMP and seeing if it's a lighting issue? Also, maybe turn off all the expressions on the geometry; the geometry might be leaving the camera's field of view. Are you on macOS or on PC?
@@TheInteractiveImmersiveHQ I've encountered the same issue and tried your suggestions here, on a PC. I can see the tracking fine in geo1, added a Constant material, etc. I will keep trying, but if you have a clue what might be going on, your help is greatly appreciated.
Never mind! Figured out the issue: the geometry was just ridiculously far away from the camera! 😅
@@Nanotopia Haha, glad you figured it out! Geometry scenes can definitely be a bit tricky in TouchDesigner, as the viewers aren't as easy to use as something like Blender or C4D. Having a few troubleshooting steps in mind, like making sure the geometry is in the camera view or double-checking lighting and materials, is always a good thing; it's something even I still do regularly :)
What about tracking two or more faces?
We'll be revisiting more Nvidia AR features shortly, so I can definitely tackle this soon :)
Can I do that with ARKit? Thanks!
Definitely! You'll have to play with the network a bit, as the data coming in will likely be different. That said, ARKit provides a bunch of face-related info for making effects like these. Here are some ARKit tutorials to get you started!
*iPhone Sensors & Depth Camera in TouchDesigner Tutorial - Part 1!*
ruclips.net/video/pwwuZj8KK6M/видео.html
*iPhone ARkit & Depth Sensor in TouchDesigner Tutorial - Part 2!*
ruclips.net/video/dfKfVJfy7SI/видео.html
@@TheInteractiveImmersiveHQ Thanks!!! I've already seen those tutorials; they're amazing!
Very, very useful, thanks! Wild, I thought you would have used a 3D model for this!
It's cool how far all this tech has come! We'll be diving into the 3D model side of things in a few weeks :)
@@TheInteractiveImmersiveHQ Thank you so much! I would be very interested in learning face tracking with 3D objects. Does the tutorial exist yet?
@@venusgrivegnee2330 Can you tell me more about what you're trying to do? I can then see what some good resources might be.
@@TheInteractiveImmersiveHQ I'm new to TD, and for the moment I'm only trying to track 3D objects to some points on the face. I would also like to replace or modify the default 3D head with something else to make an original 3D face filter. Let me know if you have any clues or resources! Thank you very much.
You are super.
Thanks :)