Thank you for uploading! Your teaching is so clear and the best among TouchDesigner educators on RUclips!
Glad to hear you find it helpful! Thanks for watching :)
That was a great simple and well presented tutorial that will suit many applications. Thanxx!
Thanks, Paul! We're glad you enjoyed it :)
What quality content! I am definitely gonna subscribe for the paid content as soon as I get paid by my agency. 🤪
Our pleasure and thanks for watching! Looking forward to seeing you in The HQ PRO! I think you'll love it :)
Great video thanks! LOVING all the feedback ideas!
Our pleasure! Feedback is a really fun and interesting tool - highly recommend continuing to explore with it! Thanks for watching :)
Such a clear and interesting tutorial 😄 Thanks for sharing
Glad it was helpful! Thanks for watching :)
BRO! this is amazing! thank you so much
No problem! Thanks for watching :)
Nice video!! Thank you for explaining stuff. Do you think I can replace the mouse input with a webcam or Kinect input?
For sure! You could actually set this up to work with Kinect with just a few changes to what we build in this video!
The Kinect or Kinect Azure CHOP provides CHOP control channels from the Kinect’s body tracking functionality. There are a lot of channels coming from the Kinect, so it's best to use a Select CHOP to isolate the tx/ty channels for the left/right hand that you want to use to control the effect.
Then, if you replace the CHOP channels coming from the Mouse In CHOP with these tx/ty channels, you can use the Kinect to control the effect. You may need to add a Math CHOP after the Select CHOP to adjust the range of the values coming from the Kinect, but otherwise it should work just fine. Hope that helps!
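As a rough sketch of how that wiring could look (the operator and channel names here are assumptions - the Kinect and Kinect Azure CHOPs name their channels differently, so check what yours actually outputs):
Select CHOP (select1): Channel Names set to something like p1/hand_l:tx p1/hand_l:ty
Math CHOP (math1): From Range set to the range the Kinect actually outputs, To Range set to the range the mouse channels covered in the video
Then, wherever the network referenced the Mouse In CHOP's channels, point the expressions at the Math CHOP instead, e.g. op('math1')['p1/hand_l:tx'] and op('math1')['p1/hand_l:ty']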
Great tutorial! I'm trying to make the blob appear bigger as you get closer with a Kinect. I have it tied to the z axis but it gets smaller as I get closer. Any tips on how to make it go the other way?
The Math CHOP can help with this! Once added to your network, use the From Range and To Range parameters to change the input/output range of the signal. To flip the direction, set the To Range values in reverse order (for example, To Range 1 to 0 instead of 0 to 1), so that closer values map to a larger size.
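If it helps to see what that remap does, here's the same idea in plain Python (the distance and size numbers below are just example values, not ones read from a Kinect):

    def remap(value, from_min, from_max, to_min, to_max):
        # linear remap of value from one range onto another, like the Math CHOP's Range page
        t = (value - from_min) / (from_max - from_min)
        return to_min + t * (to_max - to_min)

    # reversing the To Range flips the direction: closer (smaller tz) -> bigger size
    print(remap(0.5, 0.5, 4.0, 1.0, 0.1))  # near the camera -> 1.0
    print(remap(4.0, 0.5, 4.0, 1.0, 0.1))  # far from the camera -> 0.1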
Thanks for your clear tutorial! You explained everything well! But I am wondering why there is a small gap between my actual mouse and the circle with the 'mousein' CHOP - it would be great if you could answer me : )
Hi, would there be a way I could do this effect with the Intel RealSense D435i Depth Camera?
You could likely do a few things. The first would be to use blob tracking (we have another short video about getting started with that) to pull out some XY positions and use those instead of the mouse data we used here. Another thing you could do is take the depth texture from the RealSense and plug that right into the Lookup TOP's first input and use the same colouring system on it.
Hi, I did it, thank you very much! Is there a tutorial that explains how a hand connected to a Kinect, instead of a mouse, can move the visual?
You could actually set this up with just a few changes to what we build in this video!
The Kinect or Kinect Azure CHOP provides CHOP control channels from the Kinect’s body tracking functionality. There are a _lot_ of channels coming from the Kinect, so it's best to use a Select CHOP to isolate the tx/ty channels for the left/right hand that you want to use to control the effect.
Then, if you replace the CHOP channels coming from the Mouse In CHOP with these tx/ty channels, you can use the Kinect to control the effect. You may need to add a Math CHOP after the Select CHOP to adjust the range of the values coming from the Kinect, but otherwise it should work just fine. Hope that helps! 🙂 Let us know if you have any trouble setting it up.
Great video! Can this be done via Kinect with a hand gesture?
For sure! You could actually set this up with just a few changes to what we build in this video!
The Kinect or Kinect Azure CHOP provides CHOP control channels from the Kinect’s body tracking functionality. There are a lot of channels coming from the Kinect, so it's best to use a Select CHOP to isolate the tx/ty channels for the left/right hand that you want to use to control the effect.
Then, if you replace the CHOP channels coming from the Mouse In CHOP with these tx/ty channels, you can use the Kinect to control the effect. You may need to add a Math CHOP after the Select CHOP to adjust the range of the values coming from the Kinect, but otherwise it should work just fine. Hope that helps!
@@TheInteractiveImmersiveHQ Thank you for the informative reply!
Thanks to the presenter for the clear explanation! Is there a way to use your webcam/Video Device In to control the painting instead of your mouse? If so, how would I go about this?
Thanks, glad to hear it was helpful! If you don't have something like a Kinect available, a simple solution might be to use the Blob Track TOP with a webcam feed. This will track movement in the image and give you position values of the blob as CHOP channels, which you can use in place of the mouse to control the painting. Hope that helps!
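A rough sketch of the chain (treat the operator and channel names as assumptions and check what your network actually gives you): Video Device In TOP → Blob Track TOP, then pull the blob data out as channels (an Info CHOP or Info DAT pointed at the Blob Track TOP is one way to do that) and reference those channels wherever the mouse channels were used. Whatever form the position arrives in, you'll likely need to rescale it into the range the circle's transform expects - the math is just a linear remap, roughly this in plain Python (the resolution numbers are only examples):

    def blob_to_uv(x_px, y_px, width=1280, height=720):
        # convert a blob position in pixels to a -0.5..0.5 range
        # (adjust to whatever range your circle's translate actually uses)
        return x_px / width - 0.5, y_px / height - 0.5

    print(blob_to_uv(640, 360))   # centre of the frame -> (0.0, 0.0)
    print(blob_to_uv(1280, 720))  # a corner of the frame -> (0.5, 0.5)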
Thanks a million for the response! I found this video ruclips.net/video/ZplOrM6G6JI/видео.html, guess I should follow this as a guide. I'm a beginner - which TOP would be the best place to connect the blob track to in the painting tutorial project?
Thank you so much this is so inspiring!
So glad to hear that! Thanks for watching :)
Thanks for the tutorial! Is there a way to set the color change by the mouse's moving direction? I'm trying to make it but having a hard time figuring it out.
Sure! You could add an HSV Adjust TOP after the Ramp TOP to adjust the hue of the ramp texture. Then, for the HSV Adjust’s _Hue Offset_ parameter, you can use a Python expression to calculate the current angle of the mouse from the centre, which would look like: *math.degrees( math.atan2( op('null1')['ty'], op('null1')['tx'] ) )* Hope that helps!
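If you want to sanity-check that angle math outside of TouchDesigner, here's the same calculation in plain Python (this assumes the _Hue Offset_ parameter takes a value in degrees - if yours expects 0-1, divide the result by 360):

    import math

    def hue_from_position(tx, ty):
        # angle of the point (tx, ty) from the centre, wrapped into 0-360 degrees
        return math.degrees(math.atan2(ty, tx)) % 360

    print(hue_from_position(0.5, 0.0))   # right of centre -> 0.0
    print(hue_from_position(0.0, 0.5))   # above centre -> 90.0
    print(hue_from_position(-0.5, 0.0))  # left of centre -> 180.0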
Excellent tutorial, thank you!
Glad to hear it, thanks!
Hello there! Amazing tutorial! How could I make 2 such animations happen simultaneously on the same canvas?
Glad you enjoyed it! Great question, you could achieve this by duplicating all of the TOPs up until around 2:00 in the video, so an additional Circle, Constant, and Over TOP.
Then in the Noise CHOP, add two additional channels on the channels page (maybe something like tx2 ty2) and reference them in the second Over TOP's transform parameters.
After this, add a Composite TOP, set the operation to Over, and then connect the two Over TOPs to the Composite TOP's input. You should see two circles moving around in the same texture.
Connect this to the Feedback TOP and you've got your effect! Hope that helps :)
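As a rough sketch of the references for the second circle (the operator names are assumptions based on TouchDesigner's default naming - adjust them to whatever your network uses): if the Noise CHOP feeds a Null CHOP called null1 and the new channels are tx2 and ty2, the second Over TOP's Translate parameters would be something like op('null1')['tx2'] and op('null1')['ty2'].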
Any tips for doing this long & soft trail on the result of the opticalFlow tool?
I tried, but since opticalFlow doesn't produce a visual that's always on-screen, adding feedback doesn't really do anything - it doesn't accumulate previous frames' content into the subsequent comp.
Help?
Good question! I've never tried that, but it seems like you should still be able to create a feedback loop with it. What you could also try instead is using a Cache TOP to hold maybe 5 or 10 frames of data. Then use Cache Select TOPs to pull out all the frames in real time, plug those into a Composite TOP, and set it to Add mode. That should give you the accumulation of the last 5-10 frames of data. Would something like that work for your idea?
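As a rough layout sketch (the parameter names here are from memory, so treat them as assumptions and double-check the parameter pages): a Cache TOP (cache1) with its Cache Size set to 10, then ten Cache Select TOPs (cacheselect1 ... cacheselect10) each pointed at cache1, with each one's Cache Index set to an expression like me.digits - 1 so every copy automatically reads a different stored frame (flip the sign if the Cache TOP indexes its stored frames the other way), and finally a Composite TOP set to Add with all of the Cache Select TOPs wired into it.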
@@TheInteractiveImmersiveHQ Didn't know about the Cache TOPs, will try it out! Being a TD newbie rushing on a project is tough, but this TD is really fun. I'll try out your idea, thank you!
@@jcnesci Sure! If you look for a video on our channel about feedback trails, I show you how to use a Cache TOP, and you'd do a very similar setup to that.
At last I understand the function of the Math CHOP, thank you so much!
No problem! It's one of my most used operators - even though it's simple, it's so helpful for rescaling value ranges.
Hey, thanks for the video. If I used a Kinect TOP and mounted the Kinect on the ceiling, could I draw multiple "circles" on the floor with a projection? How would I achieve that, adapting this for "multiple" people? Question: how can I generate more of those "brushes" at the same time using Kinect input?
greetz
Great question! This definitely sounds doable, you’d need multiple circles (one for each person) whose position is controlled by the location data received from the Kinect for that person. Then, you could run a TOP texture of all the circles composited together into the feedback effect we create in the video and the results should be something like what you’re describing.
To generate multiple circles, you could either use multiple Circle TOPs and composite them together with the Composite TOP, or you could use something like instancing SOP geometry (via the Geometry COMP) to generate a circle for each user the Kinect detects.
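One pattern that can keep the duplicated branches tidy (the channel names here are assumptions - check what your Kinect CHOP actually outputs): give each person's branch a numeric suffix (transform1, transform2, ...) and use the operator's own digit in the expression so the same expression works in every copy, e.g. op('kinect1')[f'p{me.digits}/hand_l:tx'] and op('kinect1')[f'p{me.digits}/hand_l:ty'].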
@@TheInteractiveImmersiveHQ Thanks so much! I am so thankful for this wonderful community. It makes getting started in this world very friendly :) Really considering Patreon!
Thank you, so helpful to combine with Kinect drawing!
Glad it was helpful! Thanks for watching :)
It doesn't work on mine, I don't understand why. I did every step, but the "feedback loop" part doesn't show up the same on my computer...
One thing to check: have you set the _Target TOP_ parameter of the Feedback TOP to *comp1*? If the _Target TOP_ parameter isn't set, the network won't generate feedback.
Thank you. So much info.
Our pleasure :) Glad you're liking our work!
What expression should we write if we have multiple values for a reference? Suppose we have two different values for transform in x - how do we write the reference expression for it?
Let’s say you have two separate LFO CHOPs named lfoA and lfoB that you want to use together in the expression. You would write:
"op('lfoA')[0] * op('lfoB')[0]" without the double quotes, and make sure to change the parameter mode from Constant to Expression. In those expressions, the [0] section is grabbing the first channel of the LFO CHOP, but you could also reference it by name using the format ['chanName'] instead. Hope that helps!
@@TheInteractiveImmersiveHQ Thanks, I tried this expression too, but it simply multiplies both values and gives one value. My main concern is to take both values simultaneously. Suppose we have one CHOP as a Mouse In and another as an OSC In, so both have different values at a time. When I use the mouse it should take its value, but when I use OSC it should switch to the OSC In and use its value, then back to the mouse, and vice versa. Hope you understand my concern.
Hey, I'm just wondering why you prefer to transform the circle by transforming the composite instead of the x/y on the circle operator itself. Performance? Or something else?
Usually transforming the position (or making other changes) on the generator TOP itself will take more processing power than doing a similar transformation in a different filter TOP, like the Composite TOP or Transform TOP. Although it's not necessarily going to make that much of a difference in this example, we use the method here to introduce new users to the approach, as well as to illustrate that you can sometimes achieve the same end result in multiple ways. Hope that helps!
@@TheInteractiveImmersiveHQ makes sense! thanks for replying.
How would I export this? Or add a 3D model into it?
Are you looking to export a movie recording of the effect or use the effect as a standalone program? If the latter, TouchDesigner doesn’t have the ability to “export” the programs you create in the traditional sense. TouchDesigner has a counterpart software called TouchPlayer that allows you to run the projects that you create (but not edit them) - this is commonly what we use when creating projects for clients.
As for importing 3D geometry, that can be done with a File In SOP. However, that’s going to require a pretty significant reworking of the network, because you’ll need to render the 3D geometry. Check out our beginner series video for more info: Working with SOPs and MATs - TouchDesigner Beginner Crash Course - ruclips.net/video/WgEZgNhYzqY/видео.html
How can we connect it with a Kinect instead of the mouse? 😥
If you use the Kinect or Kinect Azure CHOP, you can get CHOP control channels from the Kinect’s body tracking functionality. If you pick the CHOP channels corresponding to the body part you want to track and use those channels in place of the Mouse In CHOP, you can use the Kinect to control the effect. You may need to use a Math CHOP to adjust the range of the values coming from the Kinect, but otherwise it should work just fine 🙂
Fancy 🥰
Fancy and easy :) Thanks for watching!
🤩
mobicep saved my school project, thanks papa