PointNet | Lecture 43 (Part 1) | Applied Deep Learning

  • Published: 9 Feb 2025
  • PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation
    Course Materials: github.com/maz...

Comments • 11

  • @hap4832 · 3 years ago · +5

    I am a student at an international university and I must say that this professor is amazing! He explains really well!

  • @simonb.jensen7825 · 3 years ago · +4

    Super clear explanation! Thanks a lot

  • @bourbiasalima7535 · 2 years ago · +2

    Great work. Thank you

  • @김진혁-k8e · 1 year ago · +1

    Great explanation. Thank you a lot.

  • @benmiss1767 · 2 years ago · +3

    Very clear, thanks!

  • @youssefhany4846 · 3 years ago · +1

    Please ... I didn't understand how the MLP, input transform, and feature transform work. Please answer me

  • @tomoki-v6o · 1 year ago · +1

    Is n fixed?

    • @kamyarothman8157 · 10 months ago

      Yes, 1024

    • @eli_steiner · 7 months ago

      No, n is not fixed. He mentions specifically that each point is fed through the MLPs individually, so it doesn't matter how many points (n) you have to begin with. Also, at the end, the max pool operates over all points, so again it doesn't matter how many there are. 1024 is the number of "features" extracted for each point cloud.
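
The n-independence described in this reply can be sketched in a few lines of NumPy. This is a minimal illustration, not the trained PointNet model: the weights below are random stand-ins for a single shared MLP layer, and only the shape behavior matters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Shared MLP weights: every point (x, y, z) is mapped through the SAME
# weights, so this layer works for any number of input points n.
W = rng.normal(size=(3, 1024))
b = rng.normal(size=(1024,))

def pointnet_global_feature(points):
    """points: (n, 3) array -> (1024,) global feature, for any n."""
    per_point = np.maximum(points @ W + b, 0.0)  # shared one-layer MLP (ReLU)
    return per_point.max(axis=0)                 # symmetric max pool over the n points

# Clouds of very different sizes yield a global feature of the same shape.
f_small = pointnet_global_feature(rng.normal(size=(100, 3)))
f_large = pointnet_global_feature(rng.normal(size=(5000, 3)))
assert f_small.shape == f_large.shape == (1024,)
```

Because the pooling is a symmetric function (max), the output is also invariant to the ordering of the points, which is the other key property PointNet needs.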

  • @vaibhav749 · 2 years ago

    What is the need for the transformation? The point cloud won't lose its structure before and after anyway.

    • @raimonwintzer · 2 years ago · +2

      It allows the network to find more standard representations of point clouds, which reduces the network complexity required downstream of the transformation layer to generalize to certain transformations (for example shape rotations).
      Basically: you input rotated airplane, network unrotates airplane, network doesn't need to learn hypercomplex conditional rules in further layers.
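
The "unrotate the airplane" idea can be shown concretely. In PointNet a small sub-network (the T-Net) predicts a 3x3 transform from the cloud itself; as an assumption for this sketch, we simply pretend the T-Net predicted the perfect inverse rotation, to show what effect applying it has on the downstream input.

```python
import numpy as np

rng = np.random.default_rng(1)
points = rng.normal(size=(256, 3))  # stand-in for an object in canonical pose

# A rotation about the z-axis, as might occur in the input data.
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
rotated = points @ R.T  # the network actually receives a rotated cloud

# Hypothetical ideal T-Net output: the inverse rotation (R.T for a rotation).
predicted_T = R.T
canonical = rotated @ predicted_T.T  # apply the predicted transform

# The downstream layers now see the cloud back in its canonical pose,
# so they never have to learn rotation-dependent rules themselves.
assert np.allclose(canonical, points)
```

In the real network the predicted transform is learned end-to-end and is not guaranteed to be a pure rotation; the paper adds a regularizer pushing the (64x64) feature transform toward an orthogonal matrix.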