Nice and easy explanation. Expecting more uploads, especially on SLAM and registration.
Thanks a lot! I am on it, doing my best! SLAM and registration coming real soon 🧔
Great video Florent. Subscribed!
Much appreciated!
Excellent content. I am wondering what methods or approaches we should use to evaluate a point cloud segmented with RANSAC?
Thanks a lot! In that case, you could use a mix of a sharpness measure (i.e., how well you respect object borders) and F1-scores if you have ground-truth data at the instance level. Otherwise, you can use clustering metrics, but they are not as pertinent.
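Here is a minimal sketch of what I mean, assuming one predicted label per point and (optionally) instance-level ground truth; the array names and the random placeholder data are assumptions:

```python
import numpy as np
from sklearn.metrics import adjusted_rand_score, silhouette_score

points = np.random.rand(5000, 3)             # XYZ coordinates (placeholder data)
pred_labels = np.random.randint(0, 8, 5000)  # labels from your RANSAC/DBSCAN segmentation
gt_labels = np.random.randint(0, 8, 5000)    # instance-level ground truth, if available

# With ground truth: how well predicted segments match the true instances
print("Adjusted Rand Index:", adjusted_rand_score(gt_labels, pred_labels))

# Without ground truth: geometric compactness/separation of the segments
print("Silhouette score:", silhouette_score(points, pred_labels))
```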
Thank you for the great video! I have a question regarding line 44 of the code --> why is 'nn_distance' calculated based on 'pcd' and not 'pcd_downsampled'?
Thanks! Great question! If the downstream processes use the downsampled cloud, then it should be pcd_downsampled... But I may have done a dirty trick there; I will get back into the code and follow up afterwards.
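For reference, a quick sketch of the fix I have in mind, assuming the variable names from the tutorial (the file path and voxel size are placeholders):

```python
import numpy as np
import open3d as o3d

pcd = o3d.io.read_point_cloud("your_cloud.ply")            # placeholder path
pcd_downsampled = pcd.voxel_down_sample(voxel_size=0.05)   # voxel size is illustrative

# compute the mean nearest-neighbor distance on the cloud you actually process downstream
nn_distance = np.mean(pcd_downsampled.compute_nearest_neighbor_distance())
print("mean NN distance (downsampled):", nn_distance)
```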
Thanks a lot. I have started learning about 3D reconstruction. At 5:50, I saw a room with a very dense point cloud. I wonder how to create a dense point cloud like that, because in my understanding the walls have very little texture to detect. I use COLMAP to reconstruct my 3D models.
For dense point clouds like this, with textureless walls, you need an active sensor such as a LiDAR sensor (here it is a terrestrial laser scanner). Otherwise, a photogrammetric pipeline with computer vision (i.e., COLMAP) will struggle with textureless objects! I hope this helps you!
@@FlorentPoux my dataset is a BTS tower. I have used COLMAP, and after running MVS there are very few keypoints on the antennas, so I am stuck in that situation. I want a dense point cloud on the surface of the antennas. Could you give me some advice, please?
Thanks a lot. This series is really great. I am in the process of learning to turn point clouds into usable models for VR, and this is really helpful.
Have you ever tried to use the output model for VR/AR applications?
Thanks for the kind words! Awesome! Yes, I have a working solution for that, so let me put it on the 2024 roadmap as a guide ;)
Can you please do a video about Ransac for outlier removal in a point cloud?
Yes I can! Coming out this week ! 🍒🌞
Hey, great work. How can we obtain the coordinates/convex hull/polygon/bbox for the segmented parts?
I will do a tutorial on this! Otherwise, you can leverage Open3D to get these elements.
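In the meantime, a hedged sketch with Open3D; the segment file is just an assumption standing in for one of your extracted clusters:

```python
import numpy as np
import open3d as o3d

segment = o3d.io.read_point_cloud("segment_0.ply")  # placeholder path

aabb = segment.get_axis_aligned_bounding_box()             # axis-aligned box
obb = segment.get_oriented_bounding_box()                  # oriented box
hull, hull_point_indices = segment.compute_convex_hull()   # convex hull as a TriangleMesh

print("AABB min/max:", aabb.get_min_bound(), aabb.get_max_bound())
print("OBB corner points:", np.asarray(obb.get_box_points()))
print("Hull vertices:", np.asarray(hull.vertices))
```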
@@FlorentPoux yes, I tried the OBB, convex hull, and even alpha shapes. None of them work the way a polygon works in 2D.
@@FlorentPoux I tried Open3D, but it isn't good.
Thank you very much Florent for such a clear, concise and informative example. Best of luck in this channel.
I have an interesting case where I need to find the valves in a Gas plant (full of pipes).
I'm not sure the RANSAC/DBSCAN approach will work here, or whether I will need some other way to manually label the valves and then feed them into PointNet++ or VoteNet.
Do you have any tips for me?
Thanks a lot!
Great use case you have! So yes indeed, you could work with a two-step process: data labelling with the help of unsupervised approaches like this workflow, then feeding the labelled dataset to a Random Forest or PointNet++.
You could (1) detect the ground, (2) detect the walls, (3) cluster the remaining elements, and (4) add a conditional rule to give a class to each cluster (if detected from planes, it is ground if horizontal, else a wall; if a cluster has the shape of XXX, it is likely a hosting element of a valve). From there, check out this video: ruclips.net/video/rsWcT3HSXf4/видео.html , and give each segment as input to the visualizer to confirm the label. Then the next stage is supervised learning with PointNet++ or other networks (you can check out the 3D Deep Learning Course at learngeodata.eu for that). A rough sketch of steps 1-3 follows below. I hope this helps!
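Here is that rough sketch of steps 1-3; the thresholds, number of planes, and the verticality rule are assumptions to adapt to your plant:

```python
import numpy as np
import open3d as o3d

pcd = o3d.io.read_point_cloud("plant.ply")  # placeholder path
segments, rest = [], pcd

# 1-2) iteratively extract the dominant planes (ground, walls, large flat structures)
for _ in range(10):
    if len(rest.points) < 1000:  # stop when too few points remain
        break
    plane_model, inliers = rest.segment_plane(distance_threshold=0.02,
                                              ransac_n=3, num_iterations=1000)
    a, b, c, d = plane_model
    # plane normal close to the Z axis -> horizontal plane -> ground
    label = "ground" if abs(c) > 0.9 else "wall/structure"
    segments.append((label, rest.select_by_index(inliers)))
    rest = rest.select_by_index(inliers, invert=True)

# 3) cluster what remains (candidate valve hosts, fittings, ...)
cluster_labels = np.array(rest.cluster_dbscan(eps=0.05, min_points=20))
print("clusters found:", cluster_labels.max() + 1)
```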
Thanks for sharing; the content is great. Would you recommend open3d for segmenting organic scans, such as head or facial? For instance, to segment a head scan into body features.
Hi Ronnie! Yes indeed, this works the same for body features!
Hi, thanks for the tutorial. Do these techniques apply to point clouds (.ply) generated by Gaussian Splatting?
Yes they do! A video is in progress on it haha!
@@FlorentPoux awesome 😎🔥
Once again, amazing tutorial! Thank you very much!
If I eventually want to label the data (according to this segmentation), what would be the best way to do so?
Also, is there a way to preserve the RGB data?
Thank you very much 🙏
Thanks a lot! So indeed, you can preserve the segmentation and use it as labelling. You can approach this by first creating a table of the labels you want, and then matching clusters to those labels. Concerning RGB, you can indeed keep it! For example, you can use an external array and link it based on point indexes. I hope this helps you!
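A minimal sketch of that idea, with placeholder file and variable names: keep the RGB and labels in external numpy arrays that share the point ordering of the cloud.

```python
import numpy as np
import open3d as o3d

pcd = o3d.io.read_point_cloud("indoor_scan.ply")  # placeholder path
xyz = np.asarray(pcd.points)
rgb = np.asarray(pcd.colors)             # preserved RGB, same row order as the points
labels = np.zeros(len(xyz), dtype=int)   # one label slot per point

# e.g., mark the inliers of a detected plane as class 1 ("wall")
_, inliers = pcd.segment_plane(distance_threshold=0.02, ransac_n=3, num_iterations=1000)
labels[inliers] = 1

# export XYZ + RGB + label as one table
np.savetxt("labelled_cloud.txt", np.hstack([xyz, rgb, labels[:, None]]))
```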
Sir, very informative. At the end of the video, you talk about finding the dimensions of each plane and door. How do we do that to calculate the area?
I will do another video on it and keep you posted; it is on my to-do list now!
Hello, thank you for this amazing tutorial. How hard would it be to trace the edge of every segment and vectorize it?
Thanks a lot, Christian! Hmm, great question! I think you could use a highlighting technique and try to grasp the output vector within Blender, or just compute a 3D alpha shape or a more precise derivative to get the vector outside of Blender! To answer the easy/hard question, I think getting the perfect shape is closer to the hard part, but fun :)
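For the alpha-shape route, a short hedged sketch with Open3D (the alpha value and file names are assumptions you would tune per segment); from the resulting mesh you can then trace the boundary edges:

```python
import open3d as o3d

segment = o3d.io.read_point_cloud("segment_0.ply")  # placeholder path
alpha = 0.1  # smaller alpha -> tighter, more detailed shape (to tune)
mesh = o3d.geometry.TriangleMesh.create_from_point_cloud_alpha_shape(segment, alpha)
mesh.compute_vertex_normals()
o3d.io.write_triangle_mesh("segment_0_alpha_shape.ply", mesh)
```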
@@FlorentPoux Thank you for replying! I played with the notebook, more specifically with the alpha shapes, and I'm thinking of increasing the alpha for more detail and then smoothing the lines with algorithms like PAEK or Bezier interpolation. That way, I think the shapes will be more "accurate", if I can say it like that, and more aesthetically pleasing to the eye.
Just subscribed as well, great video. I was thinking: in the RANSAC loop, would it be possible to export the segments as individual meshes/point files, so they could be brought into another software?
Thanks! Welcome :)! Haha, you have the right mindset; indeed, this is feasible. In one of the course lines at the 3D Geodata Academy I show how to do that. But basically, it comes down to adding a few lines within the loop to export each segment as a single file.
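Something along these lines (variable names and the number of planes are assumptions, in the spirit of the tutorial's loop):

```python
import open3d as o3d

pcd = o3d.io.read_point_cloud("room.ply")  # placeholder path
rest = pcd
for i in range(6):  # number of planes to extract is illustrative
    _, inliers = rest.segment_plane(distance_threshold=0.02,
                                    ransac_n=3, num_iterations=1000)
    segment = rest.select_by_index(inliers)
    o3d.io.write_point_cloud(f"segment_{i}.ply", segment)  # one file per segment
    rest = rest.select_by_index(inliers, invert=True)
o3d.io.write_point_cloud("rest.ply", rest)  # whatever was not assigned to a plane
```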
Thank you for this informative video. I have a problem with the remove_statistical_outlier method, it's running forever without sending any message, do you have any idea why?
Thanks! This is due (from what I read) to your search parameters. For example, if your search condition (radius) takes in all the points of your point cloud, this is normal. Try identifying the average distance between points, and set a search condition below this to check whether it solves your issue.
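A sketch of that idea; the parameter values are assumptions to adapt to your cloud density:

```python
import numpy as np
import open3d as o3d

pcd = o3d.io.read_point_cloud("noisy_cloud.ply")  # placeholder path
mean_nn = np.mean(pcd.compute_nearest_neighbor_distance())  # average point spacing

# statistical filter: small neighborhood, moderate std ratio
pcd_stat, _ = pcd.remove_statistical_outlier(nb_neighbors=16, std_ratio=2.0)

# radius filter: a radius of a few times the mean spacing, not the whole cloud extent
pcd_rad, _ = pcd.remove_radius_outlier(nb_points=8, radius=5 * mean_nn)
```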
I do have a question. Now that you have finished your segmentation and want to do measurements, like calculating the wall surface or the distance between two walls, how would you do that? Can you do that in Python too?
Yes you can! You can use Open3D to do this interactively.
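A rough sketch of the interactive way (the file name is a placeholder): pick two points with Shift+click in the editing visualizer, then measure their distance.

```python
import numpy as np
import open3d as o3d

pcd = o3d.io.read_point_cloud("segmented_wall.ply")  # placeholder path

vis = o3d.visualization.VisualizerWithEditing()
vis.create_window()
vis.add_geometry(pcd)
vis.run()            # Shift+click two points, then close the window
vis.destroy_window()

idx = vis.get_picked_points()
if len(idx) >= 2:
    p = np.asarray(pcd.points)
    print("distance:", np.linalg.norm(p[idx[0]] - p[idx[1]]))
```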
Thanks, subscribed
Thanks for the sub!
How do I find an object's dimensions?
You can extract the bounding box, for example.
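A short sketch, assuming the object is already isolated as its own cloud:

```python
import open3d as o3d

obj = o3d.io.read_point_cloud("object.ply")  # placeholder path
extent = obj.get_axis_aligned_bounding_box().get_extent()
print("dimensions along X, Y, Z:", extent)
```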
Hi! Is there a way to detect features or geometric shapes in real time from a laser scanner? Thanks for all your videos!
Yep! But usually this is tied to C++ or to having a nice wrapper to execute code in real time.
Great video. I am a professional land surveyor and would like to learn these skills to improve my business. Which of your courses would you recommend for someone who mostly scans interiors for architectural modeling?
Thanks a lot @MrTootall21! Awesome that you want to dive more! I would then recommend the Point Cloud Processor Course that gives an end-to-end workflow for 3d point cloud processing (laser scanning, photogrammetry, lidar...)
Hey bro, I have a point cloud; it's a triangle one, so I have 3200 columns, because it captures 3200 points in one profile.
I can save it as a cloud matrix CSV.
Great video, but please turn down the background music while you talk; that really distracted me.
Thanks! I did for the follow-up :) Indeed, I'm progressively learning the art of RUclips :)
Hey, can we just click on a point cloud to pick a single point instead of picking a plane?
Yes, this is feasible as well.
Hello Florent, I'm very interested in your work. I wanted to ask you about your specs for running JupyterLab with these examples you post, because I keep getting my kernel killed running the code (specifically with the translate method). I've tried to find an answer but no solution so far. I've tried on Ubuntu and Windows, with 8 and 32 GB of RAM, and NVIDIA cards, and I'm stuck. Do you have CUDA running in your env? How can I avoid the kernel problem? (I have a similar error in PyCharm, btw.)
Hey, so you can basically get up and running with a low-key config (e.g., no GPU, 8 GB of RAM, and 5 GB of HDD space). I prefer, especially with 3D stuff, to work locally without using cloud kernels. So the best approach is to set up a local environment. This should help you there: ruclips.net/video/82mihomheRM/видео.html&lc=UgwRyHu6GYGsR2Jf-hd4AaABAg
I had the same problem as yours. Both methods "paint_uniform_color" and "translate" killed my Python kernel (3.11). Downgrading the numpy version from 2.1 to 1.26.4 solved the problem. (I have a laptop, no GPU, 16 GB of RAM.)
@@TheDudule01 same here. Downgrading to numpy 1.xx.xx worked for me.
I'm trying to follow all the steps but I'm stuck at the data pre-processing step; I keep getting "RPly: Unable to open file".
I tried different .ply files, but I'm unable to read them! Please, can anyone help?
Alright, I fixed it! On Windows, I have to insert the complete file path, as in DATANAME = "E:/Spyder/Spyder/data/230725_edit.ply". This fixed the reading error.
Haha, these are the best bugs; they usually take some time to find! Thanks for the heads-up that you found the solution 😊
brilliant!
Thanks so much! Very happy this helps you!
This is a very planar example; does it work for a series of concave/convex elements, or is it simply domain-specific to planes?
Yes, but we need to adjust the underlying RANSAC model from a plane to another geometry (e.g., a sphere, a cylinder, ...).
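As a hedged illustration of swapping the model (Open3D's built-in segment_plane only handles planes), here is a minimal pure-numpy sphere RANSAC; thresholds and iteration counts are assumptions:

```python
import numpy as np

def fit_sphere(p):
    # Sphere through 4 points: subtract the equation |x - c|^2 = r^2 for the
    # first point from the others to get a linear system in the center c.
    A = 2.0 * (p[1:] - p[0])
    b = (p[1:] ** 2).sum(axis=1) - (p[0] ** 2).sum()
    center = np.linalg.solve(A, b)
    radius = np.linalg.norm(p[0] - center)
    return center, radius

def ransac_sphere(points, threshold=0.01, iterations=1000, seed=0):
    rng = np.random.default_rng(seed)
    best_inliers, best_model = np.array([], dtype=int), None
    for _ in range(iterations):
        sample = points[rng.choice(len(points), 4, replace=False)]
        try:
            center, radius = fit_sphere(sample)
        except np.linalg.LinAlgError:
            continue  # degenerate (near-coplanar) sample
        dist = np.abs(np.linalg.norm(points - center, axis=1) - radius)
        inliers = np.where(dist < threshold)[0]
        if len(inliers) > len(best_inliers):
            best_inliers, best_model = inliers, (center, radius)
    return best_model, best_inliers
```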