Hey Elburz, this is a great tour through the Azure .
Thanks Greg! We'll have a bunch more Azure pieces now that I finally got my hands on one :)
Hey! Following the same steps, when I right-click my Kinect TOP to view as points, all the points inside the TOP go mad and the image just randomly shifts around, and the same issue happens with the Geo SOP. What could be the issue?
Before you go into view-as-points mode, do your textures look similar to mine, or are the flat textures also going wild? What kind of GPU do you have as well?
@@TheInteractiveImmersiveHQ I'm getting the same issue. There is a red X in the upper right corner of my Math and Null operators, and the picture in the Geo is the depth image, but it's moving around like crazy. I have a 3090, so it shouldn't be the card. But it is the free version of TD, so maybe it has something to do with that?
@@TheInteractiveImmersiveHQ Solved my problem by turning 'adaptive homing' off
@@kblinse06 Oh great! Yes, with any kind of particle system or point cloud (basically anything that isn't a static model) I recommend turning off adaptive homing. Some folks like it, but I generally prefer it off, and I set it to default off in the main TouchDesigner application preferences.
@@TheInteractiveImmersiveHQ Thank you for this! Solved my issue :)
Subscribed! Thanks so much for this, I'm looking forward to the videos coming out and I'll def check your program out!
Great! Our pleasure to dive into a topic you're interested in! More coming soon :)
Is this possible with the Intel RealSense? Or the old Kinect?
It absolutely is. A few of the operators might be slightly different, but as long as you have a point cloud texture (which the Kinect 2 does) you can do a similar process for sure.
Tell us how to use Azure Kinect and calibrate the camera to project a Kinect image!))
We'll add it to our list of content to make. Thanks for the suggestion!
Very cool effect! - Question...does this require TD-Pro...or can this be done with Commercial?
Absolutely can be done with commercial. It would even work on the free learning version as well, since the resolution of the point clouds is under 1280x720.
Could I ask how you'd go about rigging up the color with just a regular old Kinect 2? There isn't a Kinect 2 Select TOP...
With the Kinect 2, you can just add additional Kinect TOPs to the network and set the Image parameter to the one that you need. Depending on the setup that you're working with, you might need to use the Camera Remap parameter to align depth image textures with color camera textures.
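If it helps, here's a tiny scripted version of that idea (a hedged sketch: operator names like kinect_color are made up, and the exact Image menu tokens can vary by build, so check par.image.menuNames):

```python
# Hypothetical sketch: a second Kinect TOP for the color stream next to an
# existing depth/point cloud Kinect TOP. 'color' is an assumed menu token --
# print(op('kinect_color').par.image.menuNames) to see the real options.
net = op('/project1')                          # assumed network location
kcolor = net.create(kinectTOP, 'kinect_color')
kcolor.par.image = 'color'                     # pick the image you need per TOP
# If the depth and color images don't line up, look at the remap options
# on the Kinect TOP to align the color camera to depth camera space.
```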
Great meeting you at LDI! Do you think it possible to output two offset versions of this for left and right eye and send the two images through NDI or whatever into Unity and send each to corresponding eye in connected HMD for 3D viewing? Could do a similar effect with shader graph in Unity natively, but thinking of other possibilities with more efficient processing in TD…
Hi Matt! Our pleasure :) What you could do is build the instancing setup like this, then create two Render TOPs and two Camera COMPs, and move the cameras slightly apart from each other, which would give you your left/right eye renders that you can NDI over into Unity. Could you give something like that a try?
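For what it's worth, a minimal sketch of that stereo idea in Python (assuming a Geometry COMP named geo1 from the video; the eye separation, camera distance, and NDI names are all placeholders):

```python
# Minimal stereo-render sketch (names like geo1/cam_left are assumptions).
# Creates two offset cameras, two Render TOPs, and two NDI Out TOPs so
# Unity can pick up a left/right eye pair.
net = op('/project1')
eye_sep = 0.065  # assumed interpupillary distance in scene units

for side, offset in (('left', -eye_sep / 2), ('right', eye_sep / 2)):
    cam = net.create(cameraCOMP, 'cam_' + side)
    cam.par.tx = offset            # horizontal offset between the eyes
    cam.par.tz = 5                 # assumed distance back from the point cloud

    ren = net.create(renderTOP, 'render_' + side)
    ren.par.camera = cam.name      # point the Render TOP at this eye's camera
    ren.par.geometry = 'geo1'      # assumed Geometry COMP from the video

    ndi = net.create(ndioutTOP, 'ndi_' + side)
    ndi.inputConnectors[0].connect(ren)   # each eye goes out as its own stream
    ndi.par.name = 'eye_' + side          # NDI source name Unity will see
```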
@@TheInteractiveImmersiveHQ yah I think that was the idea, thanks for the wise/affirmative nod! Looking forward to playing with iPhone LiDAR too from your other vids.
This is super cool! I have my point cloud looking awesome, but I was wondering what all I could do with it from here. I'd like to add some cool effects to it, but I'm unsure of where to start. I was thinking about adding a delay to where the squares have a little bit of lag getting to the points as it captures movement, any ideas on how I could make this happen?
Thanks! 😀 You might try looking into something like the Cache TOP or Time Machine TOP for adding time delay effects to the point cloud TOP texture (before it's used in the Geometry COMP). Also check out our video Generative Point Clouds in TouchDesigner (ruclips.net/video/__dHYGe9bQs/видео.html) for some additional inspiration/a look at some techniques for processing the data. Hope that helps!
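As a starting point, here's a rough Cache TOP delay sketch (null1 stands in for your point cloud texture; the cachesize/outputindex parameter names are from the built-in Cache TOP, but double-check them on your build):

```python
# Rough frame-delay sketch using a Cache TOP.
net = op('/project1')
cache = net.create(cacheTOP, 'cache_delay')
cache.inputConnectors[0].connect(op('null1'))  # assumed point cloud texture
cache.par.cachesize = 60       # keep the last 60 frames (~1s at 60 FPS)
cache.par.outputindex = -30    # output the frame from 30 frames ago
# Feed cache_delay into the Geometry COMP instancing instead of null1
# so the squares lag behind the live capture.
```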
@@TheInteractiveImmersiveHQ Ooooh thanks so much!!
Great video! Is there any way to import an mkv file from the Kinect DK Recorder and use that for point cloud data in place of a live azure? Thanks!
I haven't done that personally but you could try pointing a Movie File In TOP at the mkv file and seeing if it loads it up. If not, you might need to do an intermediary step of converting the MKV file to another format that supports 32-bit depth, is lossless, and can be read by TouchDesigner (like an exr sequence or similar).
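If you want to test that quickly, something like this (the path is a placeholder):

```python
# Quick test of loading a Kinect DK Recorder MKV into a Movie File In TOP.
mov = op('/project1').create(moviefileinTOP, 'mkv_test')
mov.par.file = 'recordings/capture.mkv'  # if this errors or stays blank, convert first
```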
How would I output from Geometry to an image file (png)? In addition, how could I have it save and overwrite that image file every x seconds?
First, you'd need to add a rendering pipeline, which consists of a Camera COMP, Light COMP, some kind of material (a Phong MAT might work here) and a Render TOP. You'll need to make sure that you've assigned the MAT to the Geometry COMP, by dragging the MAT onto the Geo COMP's _Material_ parameter on the Render page.
Once you've done this, you'll have a rendered view of the network output as a texture in the Render TOP, which you can then save/add post effects to/whatever else you might want to do with it.
To save the texture to a file, you can use the Movie File Out TOP. Set the _Type_ parameter to Image, and then pick the file type you want via the _Image File Type_ parameter.
To repeatedly save images after an interval of time, you can use the Timer CHOP. On the Timer page of the Timer CHOP's parameters, set the _Length_ parameter to the number of seconds you want the interval to be, and then turn the _Cycle_ parameter on and the _Cycle Limit_ parameter off. Finally, on the outputs page, turn _Cycle Pulse_ on.
Then, make a CHOP reference from the newly added cycles_pulse channel to the _Add Frame_ button within the Movie File Out TOP, and it will save the image every time the timer finishes the particular interval! If the _Unique Suffix_ parameter in the Movie File Out TOP is turned off, the file will be overwritten each time, as the file name will stay the same. Hope that helps!
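If you'd rather script that last step than wire the CHOP reference by hand, a CHOP Execute DAT pointed at the Timer CHOP can pulse Add Frame directly (operator names timer1/moviefileout1 are assumptions):

```python
# Paste into a CHOP Execute DAT with its CHOPs parameter set to timer1
# and Off to On enabled. Fires once per completed timer cycle.
def onOffToOn(channel, sampleIndex, val, prev):
    if channel.name == 'cycles_pulse':
        # Pulse the Movie File Out TOP's Add Frame parameter to write an image
        op('moviefileout1').par.addframe.pulse()
    return
```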
This is not possible with a regular webcam right? 😅
Nope, because what the Kinect is doing is actually giving you all of the 3D information of the scene, then using that data to put little boxes all over the 3D environment, and then colouring them using its normal RGB camera. A webcam on its own only contains RGB information; it doesn't have any 3D data streams in it.
Please tell me how to add a video texture to a point cloud if I have a Kinect 2? Thanks
Sure, you'd follow the same approach described in the video, but instead of using the Kinect Azure/Kinect Azure Select TOPs, you can use the Kinect TOP and set the Image parameter to Depth Point Cloud or Color Point Cloud. Everything else should be the same from there. Hope that helps!
Thank you. 감사합니다.
Anytime! Hope you've found it useful :)
Is it possible to connect two SDKs to TD? I was trying to make a similar project with two sensors for more detailed depth data.
Great question! If you're using the Azure Kinect, you can connect as many to your computer as you have bandwidth for. If you're using the Kinect v2, however, you're limited to just one per computer. Hope that helps!
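As a hedged sketch of the multi-Azure setup (the Sensor parameter name and its menu contents here are assumptions; confirm them on the Kinect Azure TOP itself):

```python
# One Kinect Azure TOP per attached device, each pointed at a different
# sensor. par.sensor is the assumed python name of the Sensor parameter,
# which lists attached devices by serial number.
net = op('/project1')
k1 = net.create(kinectazureTOP, 'kinectaz_1')
k2 = net.create(kinectazureTOP, 'kinectaz_2')
print(k1.par.sensor.menuNames)                # inspect available device serials
k1.par.sensor = k1.par.sensor.menuNames[0]    # first attached Azure
k2.par.sensor = k2.par.sensor.menuNames[1]    # second attached Azure
```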
Hi, I have a project I need help with. Can I use two D415 RealSense cameras to make one point cloud mesh?
You can, but you'll have to do the sensor fusion manually, unfortunately. Unless you find another app that does the combination for you or you do your best to line up the point clouds using something like the Point Transform TOP to manually orient everything together.
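A rough sketch of that manual alignment with a Point Transform TOP (the operator names and offsets are placeholders you'd tune by eye or from a measured calibration):

```python
# Nudge the second camera's point cloud until it overlaps the first.
net = op('/project1')
pt = net.create(pointtransformTOP, 'align_cam2')
pt.inputConnectors[0].connect(op('pointcloud_cam2'))  # assumed second-camera texture
pt.par.tx = 0.5    # translate to match the physical offset between the cameras
pt.par.ry = 30     # rotate to match the second camera's angle
# Then instance both point clouds (or composite the textures) into one scene.
```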
Can you tell me what the salary range might be for touchdesigner career?
This is dependent on a lot of factors, including location, experience, whether you're full-time or freelance, etc. It's worth checking out our blog (interactiveimmersive.io/blog/) for some materials on the topic, or joining the Interactive & Immersive HQ PRO (interactiveimmersive.io/lp/hq-pro-full-trial/) to get assistance from industry pros who can help you find the appropriate range.
Could you show us how to do closed-loop Kinect scanning, if it's possible in TouchDesigner, like in this video: Large-scale real-time mapping example using Azure Kinect? Thx
Actually, I would not recommend buying a Kinect Azure for anyone. I'm surprised nobody points out that it has terrible latency, actually worse than the previous version. It is so bad it defeats the purpose of it; I sold it after a week. Maybe if they fix the SDK it will be usable. Serious rip-off if you ask me.
Great! Is it possible to do the same with Kinect v2?
Yup! A few operators might be slightly different, but as long as you get the point cloud texture from the Kinect 2, you can use almost the same setup as here.
@@TheInteractiveImmersiveHQ Could you recommend a tutorial for Kinect 2? Thanks
@@tiporight Inside of The HQ PRO we have a full course about using Kinect 2 and the different ways you can use all the data (similar to this video). I'd recommend giving the free trial a try, I think you'll really enjoy it:
interactiveimmersive.io/lp/hq-pro-full-trial/
How can we save the point cloud to a file, so it can be used by other apps such as Meshlab?
Great question! To save the point cloud to a file, you can use the Movie File Out TOP. The Movie File Out TOP allows you to save .exr files using the OpenEXR format.
To do this, connect the null1 TOP to a Movie File Out TOP. Then, in the Movie File Out set the Type parameter to Image, and the Image File Type parameter to OpenEXR. On the EXR parameter page, turn on the “Save As Point Cloud” setting. Below that, you have the ability to choose how you want to save the data from the null1 TOP.
In the default configuration, you’ll only get the point positions and colors found within null1. However, you can also save the colors from null2 into the same .exr file by clicking the small “+” icon below the alpha parameter, which allows you to add additional data from a separate TOP. Drag the null2 TOP into the Additional Input TOP 1 parameter, and then make sure to rename the RGBA channels to something else, because channels with the same name will overwrite each other. I usually rename the first set of channels (directly below the “Save As Point Cloud” switch) to X, Y and Z, as I use them for position data, and then use the channels below Additional Input TOP 1 as R, G, B, and A, as I use them for colour.
Back on the Movie File Out page, make sure you've set the file name you want to use (under the File parameter), and then click Add Frame to save the file. If you want to save a sequence of images instead, you can change the Type parameter from Image to Image Sequence, and then turn the Record switch on (it'll record an .exr for every frame until you turn the switch off).
Hope that helps!
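And if you ever want to script the single-image version of that, something like this (moviefileout1/null1 follow the names above; the saveaspointcloud parameter name is an assumption, so confirm it on the EXR page):

```python
# Scripted one-shot EXR point cloud save (a sketch, not the only way).
mfo = op('/project1').create(moviefileoutTOP, 'moviefileout1')
mfo.inputConnectors[0].connect(op('null1'))
mfo.par.type = 'image'                 # single image rather than a movie
mfo.par.imagefiletype = 'exr'          # assumed menu token for OpenEXR
mfo.par.saveaspointcloud = True        # assumed python name of "Save As Point Cloud"
mfo.par.file = 'output/pointcloud.exr'
mfo.par.addframe.pulse()               # write the file once
```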
Is it possible to read the diameter of a ring?
Could you clarify what sort of ring you're looking to measure? Are you looking to take the measurement from the Azure's point cloud?
Yooo that's hard!!!
Haha I know right? How many developers does it take to make a colour point cloud? :)
great!
Thanks! :)
Azure rhymes with measure. “Azhurr”.
Is that the official ruling? I feel like everyone says it a bit different haha, makes my life difficult!
Hello, I would like to ask how I can put this into Unreal Engine 5 with the OWL plugin or something else?
Wondering too