Join My AI Career Program
www.nicolai-nielsen.com/aicareer
Enroll in My School and Technical Courses
www.nicos-school.com
I swear I'm not usually the type to post comments on YouTube, but this is my fourth comment on this channel because this is solid work. Simple, clear, up to date, and straight to the point without losing information. Thanks!
Thank you so much for those words! It really means a lot to me and motivates me so much :)
@NicolaiAI Really hope there will be more content on point clouds/Colab/PointNet. Thanks again!
Sir, can you please make a separate video demonstrating how to prepare our own dataset to train the PointNet model for the point cloud classification task using a lidar dataset? It would be a great help. Thanks in advance!
I am also looking for the same; if you get any ideas, please let me know.
Hi @vishalbhapkar2359 @PIYUSHSINGH-mu2pk
I am also trying to do this. Did either of you manage it?
Great tutorial Nicolai, thanks a lot!🍻
Glad you liked it!
Very good work and explanations, thank you so much!
Thanks a lot for watching! Glad that you liked it.
Please explain PointNet++ or other point cloud methods.
I will definitely do more videos about point clouds in the future, both processing of point clouds and more applications!
Thanks for your explanation. I implemented this code and tried to save the model using model.save(), but it throws an error. Can you guide me on how to save the trained model and use it for prediction?
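For anyone running into the same thing: saving and reloading a Keras model generally looks like the sketch below. The file name and the tiny stand-in model are illustrative only, and if the real PointNet model uses custom layers or regularizers (the keras.io PointNet example defines a custom orthogonal regularizer, for instance), those may need to be passed to load_model via custom_objects.

    import numpy as np
    from tensorflow import keras

    # Tiny stand-in with the same (num_points, 3) -> class-probabilities interface
    # as the PointNet classifier from the video.
    model = keras.Sequential([
        keras.layers.Input(shape=(2048, 3)),
        keras.layers.Conv1D(32, 1, activation="relu"),
        keras.layers.GlobalMaxPooling1D(),
        keras.layers.Dense(10, activation="softmax"),
    ])

    # Save architecture + weights to disk ("model.h5" works for older Keras versions).
    model.save("pointnet_classifier.keras")

    # Reload later and predict on a hypothetical batch of one point cloud.
    loaded = keras.models.load_model("pointnet_classifier.keras")
    points = np.random.rand(1, 2048, 3).astype("float32")
    print(loaded.predict(points).argmax(axis=-1))  # most likely class index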
Thanks for your explanation! Will there be any more videos on PointNet semantic segmentation in the future?
Thanks for watching, yes I'll definitely try to make a video about that in the future!
Hi, great video, but where can I get the ipynb file for this code?
Thank you for the video. Can you make a video on PointNet++ with Keras?
Thanks for watching! I'll take a look at it
So, from a broader viewpoint: how can a 3-D matrix multiplication be done at all if there are points "missing"? In a 2-D image of, say, 100x100, there are always 100^2 pixels; but in our 100x100x100 room, there might only be around 50^3 occupied points because the room is not fully filled. How does the net treat these "unknown" points?
It depends on how you define the algorithm to treat the points. Often they are just skipped, but you can also set them to some max depth, like 10k or something like that.
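A small aside for readers: PointNet itself consumes an unordered set of points rather than a dense voxel grid, so empty space simply isn't represented. For raw depth or lidar returns, the two options mentioned above (skip invalid points, or cap them at some maximum depth) might look roughly like this sketch; the threshold values are made up.

    import numpy as np

    MAX_RANGE = 10_000.0  # example "max depth" cap; units depend on the sensor

    def clean_points(points: np.ndarray, drop_invalid: bool = True) -> np.ndarray:
        """points: (N, 3) array that may contain NaNs or zero returns."""
        if drop_invalid:
            # Option 1: simply skip points with no valid measurement.
            mask = np.isfinite(points).all(axis=1) & (np.linalg.norm(points, axis=1) > 0)
            return points[mask]
        # Option 2: keep every point but clamp/fill it to a maximum depth.
        points = np.clip(points, -MAX_RANGE, MAX_RANGE)
        return np.nan_to_num(points, nan=MAX_RANGE)

    raw = np.array([[1.0, 2.0, 3.0], [np.nan, 0.0, 0.0], [0.0, 0.0, 0.0]])
    print(clean_points(raw))         # keeps only the valid point
    print(clean_points(raw, False))  # keeps all points, capped/filled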
What should I do if I want to classify some other Gazebo model?
Thank you so much for the explanation. I have a doubt regarding the applicability of such models. The results look great, but let's say we have a point cloud of an entire room in which these objects have to be found; how could I approach such a problem with PointNet? Correct me if I am wrong, but would this model only work if we have point clouds of the individual objects? Would segmentation help in that case? I am trying to solve a similar problem of finding objects within the point cloud of a building, so it would be great if you could create a video on how segmentation works with PointNet.
Hi, thank you very much! Really appreciate it. If you want to do segmentation of whole environments, you can't really use the same model as in this video. PointNet can do segmentation too, but we will look at some other methods and algorithms here on the channel for doing segmentation on point clouds. It really depends on whether you want to do both segmentation and object recognition or only segmentation; PointNet might be fine for the latter.
Thank you for the video. I would like to know how I can reliably extract a good MNT (digital terrain model) using deep learning.
I have a set of data in point cloud format (.pcd); some are individual point cloud objects, some are semantic. How could I use Keras to create the model and perform the training and prediction?
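One possible starting point, assuming the Open3D library for reading .pcd files and purely hypothetical file names, is to turn each cloud into a NumPy array and stack them into the (num_clouds, num_points, 3) shape a Keras point cloud classifier expects; here each .pcd is assumed to already contain a fixed number of points.

    import numpy as np
    import open3d as o3d  # assumption: Open3D is available for .pcd I/O

    NUM_POINTS = 2048  # assume every cloud has been resampled to this size

    def pcd_to_array(path: str) -> np.ndarray:
        pcd = o3d.io.read_point_cloud(path)
        return np.asarray(pcd.points, dtype="float32")  # (NUM_POINTS, 3)

    # Hypothetical file list and integer labels for a two-class problem.
    train_files = ["chair_01.pcd", "chair_02.pcd", "table_01.pcd"]
    y_train = np.array([0, 0, 1])
    x_train = np.stack([pcd_to_array(p) for p in train_files])  # (3, NUM_POINTS, 3)

    # x_train / y_train can then be fed to the classifier built in the video:
    #   model.fit(x_train, y_train, epochs=..., validation_data=...)
    #   preds = model.predict(x_new)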
Very useful video
Great! Thanks for watching
Thanks!
Thanks for watching
Great video!
Thank you!
Hi, I am new to machine learning and deep learning. The topic given to us for our college major project was lidar. We searched for papers, and PointNet was one of the relevant papers we found. We are trying to implement point cloud classification. Can you suggest a proper flow for understanding this project so that we can execute and present it effectively? It would be really helpful, because this is a new topic, we are beginners, and we are not getting proper help and guidance. Please help us.
Hi, I am working on a similar project now. Can you guide me somehow? Please, any info will be helpful. How can I contact you regarding this?
Sir, how can I load my own dataset into the DATA_DIR variable so that I can train with my own data?
I just uploaded a video about that recently; it's the newest video in the deep learning tutorial.
I learned from your first-minute pitch.
Where's the ipynb file for this tutorial?
Great video, keep going! How can I train on my own custom dataset and use it with the PointNet model?
You would just need to do the exact same thing, but with your data and labels instead of the data used in this video.
@NicolaiAI How do you format the data and labels? Is it all .pcd files in a zip?
Many thanks for this great work; it really helped me a lot.
Is it possible to explain PointNet++ as well?
It would also help if you could explain how to feed point cloud data directly to the network, as I have my own point cloud dataset.
Great work again!
Thank you very much! I'll take a look at PointNet++ and see what I can do.
@NicolaiAI Thanks a lot!
This is some great stuff
Thank you very much! I appreciate that
Can you explain PointNet++ in Python as well?
You sampled the points from 3D mesh models, giving you the same number of points in each model. I only have point cloud data, with each 3D model having a different number of points. How can I sample points from the point cloud itself so as to have an equal number of points in each 3D model to give as input to the NN?
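One simple way to get a fixed-size input from clouds of varying size (a minimal sketch; 2048 is just an example, and farthest point sampling would give a more uniform result) is to randomly subsample large clouds and sample with replacement from small ones:

    import numpy as np

    def resample(points: np.ndarray, n: int = 2048) -> np.ndarray:
        """Return exactly n points from an (N, 3) cloud of arbitrary size N."""
        replace = len(points) < n  # repeat points only if the cloud is too small
        idx = np.random.choice(len(points), n, replace=replace)
        return points[idx]

    small_cloud = np.random.rand(500, 3)
    large_cloud = np.random.rand(100_000, 3)
    print(resample(small_cloud).shape, resample(large_cloud).shape)  # (2048, 3) (2048, 3)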
Sir, please make a video on semantic segmentation using PointNet.
Where can I find your GitHub link?
Is it possible to create a neural network that segments by category, for example in industrial areas, and handles very large files?
Yes, segmentation is actually a pretty big topic within neural networks and deep learning.
@NicolaiAI I spend a lot of time cleaning and separating piping, civil construction, electrical panels, and so on.
@NicolaiAI I need your teachings haha
I'm trying this for industrial areas too. Can you send me some articles or examples that you have, please? I work (as a trainee) at an engineering company, the files are very large, and I would like to do something with classification and neural networks for my final paper at college. Did you make any progress on this? Could you share your email so I can contact you?
Can you do a descriptive video on 3D object detection from lidar data using the KITTI dataset, and explain the code?
How do I save this model? Can you please guide me?
Can we use this code with the KITTI dataset?
Does anybody else get this error when trying to run the parse_dataset() function?
"TypeError: cannot unpack non-iterable NoneType object"
I think it has something to do with glob.glob(), but I don't know exactly.
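For what it's worth, that particular TypeError usually means the function being unpacked returned None, e.g. a parse_dataset() whose return statement got lost or mis-indented while copying the code, rather than glob itself failing. An empty glob result is still worth ruling out, though; a quick check, assuming a ModelNet-style DATA_DIR:

    import glob
    import os

    DATA_DIR = "ModelNet10"  # assumption: wherever the dataset was extracted

    print("DATA_DIR exists:", os.path.isdir(DATA_DIR))
    class_folders = glob.glob(os.path.join(DATA_DIR, "*"))
    print(len(class_folders), "entries found:", class_folders[:3])
    # An empty list here means the glob pattern or DATA_DIR path is wrong;
    # otherwise, double-check that parse_dataset() actually ends with a return.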
I have a point cloud segmentation problem. In my training data I have one class, but on average only ~4% of all points per point cloud are of that class, and they are usually found grouped together (same object).
How do I balance this?
If I remove most points that aren't in the class, then the point cloud will become sparse and it would be too easy to spot where the class is, since only ~8% of points will remain.
Or is there a way to train this well without balancing the training data?
Thanks!
I also have only one class. Did you find a solution for this?
@siddhikaarunachalam3598 There's an article I found specifically on balancing this; I'll link it for you tomorrow.
Basically, as I remember it, you randomly rotate/scale/add noise to a point cloud to create G(k) new point clouds. 'k' is the ratio of your two classes, and G(k) determines the number of variants of the point cloud. If the ratio is higher than the mean ratio, then increase the number of variants; otherwise decrease it, down to a minimum of 1 (a rough sketch of the idea follows below this comment).
Currently I'm using PointNet++ in PyTorch, and I don't have to fix the class imbalance for it to train, but it certainly takes a lot longer to converge without it. I highly recommend using a better model than PointNet.
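A hedged sketch of the augmentation idea described in the reply above; the rotation axis, scale range, noise level and the G(k) = round(k) mapping are all assumptions for illustration, not taken from that article.

    import numpy as np

    def augment_cloud(points: np.ndarray) -> np.ndarray:
        """Return one randomly rotated, scaled and jittered copy of an (N, 3) cloud."""
        theta = np.random.uniform(0, 2 * np.pi)            # random rotation about z
        rot = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                        [np.sin(theta),  np.cos(theta), 0.0],
                        [0.0, 0.0, 1.0]])
        scale = np.random.uniform(0.8, 1.2)                # random uniform scaling
        noise = np.random.normal(0.0, 0.01, points.shape)  # small Gaussian jitter
        return (points @ rot.T) * scale + noise

    def oversample(points: np.ndarray, k: float) -> list:
        """Create G(k) augmented variants of a minority-class cloud,
        with G(k) = round(k), clipped to a minimum of 1."""
        n_variants = max(1, int(round(k)))
        return [augment_cloud(points) for _ in range(n_variants)]

    cloud = np.random.rand(2048, 3)
    variants = oversample(cloud, k=96 / 4)  # ~4% minority class -> ratio of about 24
    print(len(variants), variants[0].shape)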
@critopadolf5534 Can you link that article for me, please?
I'm a computer science student, and for my final paper at university I wanted to do something similar to the video, but for industry. Do you have any tools or tips? Can you send me your contact email?
I have two questions:
1. The implementation, in particular the layer sizes, is different from what is reported in the PointNet paper. Why is that?
2. You use convolutional layers to implement dense layers on 2D inputs?
There are different versions of PointNet for different applications. The data is 3D in this example, hence the CNN, but you would also use a CNN on 2D data.
@NicolaiAI I understand that the spatial dimension of the data is 3D, but the points are laid out in a 2D grid. Do you use 1D convolutions instead of having to flatten the data and then apply a dense layer?
Nope, after the convolution layers I use global max pooling. It basically takes every output feature map from the CNN and outputs a single value per map for the dense layer. That's how I go from the CNN to the dense layers.
@NicolaiAI Thanks for the answers. The original paper on PointNet leaves many open questions, including whether the T-net is trained separately (the description in appendix C sure suggests so).
Gotta admit it's been quite some time since I read the paper.
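For readers following this thread, here is a minimal sketch of the Conv1D -> global max pooling -> dense transition being described; the layer sizes are illustrative, not the exact ones from the video.

    import numpy as np
    from tensorflow import keras

    inputs = keras.layers.Input(shape=(2048, 3))                             # 2048 points, (x, y, z)
    x = keras.layers.Conv1D(64, kernel_size=1, activation="relu")(inputs)    # (None, 2048, 64)
    x = keras.layers.Conv1D(128, kernel_size=1, activation="relu")(x)        # (None, 2048, 128)
    x = keras.layers.GlobalMaxPooling1D()(x)    # one value per feature map -> (None, 128)
    outputs = keras.layers.Dense(10, activation="softmax")(x)                # class scores
    model = keras.Model(inputs, outputs)
    model.summary()

    # A 1x1 Conv1D applies the same small MLP to every point independently, and the
    # global max pooling makes the pooled feature invariant to the point ordering.
    print(model.predict(np.random.rand(1, 2048, 3)).shape)  # (1, 10)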
Thank you. Can you do the same with PyTorch, or share a link that would help? Thanks!
I'll be doing a PyTorch tutorial here on the channel as well. Thanks for watching!
@NicolaiAI I am really looking forward to this. Hopefully, I get a notification when you do. Thanks!
Thank you!
Thanks for watching!
Thank you!!! Really!
Thanks for watching!
@NicolaiAI I'll look forward to the next video.
Can you do PointNet but for semantic segmentation? I am really in need of an explanation.
Yeah I will in the near future
Can someone give me the code with an explanation?
Where's the original source code?
On my GitHub, under the point clouds repo.