Understanding Graph Attention Networks

  • Published: 21 Jan 2025
  • Science

Comments • 186

  • @NimaDmc
    @NimaDmc 2 years ago +49

    I must admit that this is the best explanation of GAT and GNN one can find. A fantastic explanation in very simple English. The quality of the sound and video is great as well. Many thanks.

    • @DeepFindr
      @DeepFindr  2 years ago +2

      Thank you for your kind words

  • @xorenpetrosyan2879
    @xorenpetrosyan2879 2 years ago +4

    This is the best and most detailed explanation of graph attention I've found. Great job!

  • @kenbobcorn
    @kenbobcorn 3 years ago +26

    This was simply a fantastic explanation video, I really do hope this video gets more coverage than it already has. It would be fantastic if you were to explain the concept of multi-head attention in another video. You've earned yourself a subscriber +1.

    • @DeepFindr
      @DeepFindr  3 years ago +1

      Thank you, I appreciate the feedback!
      Sure, I'll note it down :)

  • @VenkataRahul_S
    @VenkataRahul_S 23 days ago

    Simple, clear. It makes a lot of sense to go through this video with a slate and chalk in hand. The mathematics is very well explained. Thank you.

  • @leorayder-r5x
    @leorayder-r5x 10 months ago +1

    Amazing!!! Well done, author!!!

  • @tobigm1917
    @tobigm1917 11 months ago

    Thank you very much! This was my introduction to GAT and helped me immediately get a good grasp of the basic concept :) I like the graphical support you provide for the explanation, it's great!

  • @Marauder13
    @Marauder13 5 months ago

    This might be the best and simplest explanation of GAT one can ever find! Thanks man

  • @mydigitalwayia956
    @mydigitalwayia956 3 years ago +1

    Thank you very much for the video. After watching many others, I can say that yours is the best and the easiest to understand. I am very grateful to you. Regards.

  • @snsacharya1737
    @snsacharya1737 5 months ago

    A wonderful and succinct explanation with crisp visualisations about both the attention mechanism and the graph neural network. The way the learnable parameters are highlighted along with the intuition (such as a weighted adjacency matrix) and the corresponding matrix operations is very well done.

  • @jianxianghuang1275
    @jianxianghuang1275 3 years ago +5

    I especially love your background pics.

  • @anupr567
    @anupr567 2 years ago +2

    Explained in terms of basic neural network terminology!! Great work 👍

  • @pu239
    @pu239 3 years ago +3

    This is pretty amazing content. The way you explain the concept is pretty great and I especially like the visual style and very neat looking visuals and animations you make. Thank you!

    • @DeepFindr
      @DeepFindr  3 years ago +1

      Thank you for your kind words :)

  •  A year ago

    Your work has been an absolute game-changer for me! The way you break down complex concepts into understandable and actionable insights is truly commendable. Your dedication to providing in-depth tutorials and explanations has tremendously helped me grasp the intricacies of GNNs. Keep up the phenomenal work!

  • @hlew2694
    @hlew2694 A year ago

    This is the BEST video on GCN and GAT, really great, thank you!

  • @adityashahane1429
    @adityashahane1429 2 years ago +3

    Very well explained; provides a very intuitive picture of the concept. Thanks a ton for this awesome lecture series!

  • @kshitijdesai2402
    @kshitijdesai2402 3 months ago

    I found it hard to follow initially, but after understanding GCNNs thoroughly, this video is a gem.

  • @celestchowdhury2605
    @celestchowdhury2605 2 years ago

    Very good explanation! Clear and crisp; even I, a beginner, feel satisfied after watching this. It should get more recognition!

  • @anastassiya8526
    @anastassiya8526 4 months ago

    It was the best explanation and it gave me hope of understanding these mechanisms. Everything was so well explained and depicted, thank you!

  • @samuel2318
    @samuel2318 2 years ago +1

    Clear explanation and visualization of the attention mechanism. Really helpful for studying GNNs.

  • @mohammadrzakarimi2140
    @mohammadrzakarimi2140 2 years ago +1

    Your visual explanation is really great; it helps people learn hours' worth of material in minutes!
    Please make more videos on specialized GNN topics!
    Thanks in advance!

    • @DeepFindr
      @DeepFindr  2 years ago

      I will soon upload more GNN content :)

  • @nurkleblurker2482
    @nurkleblurker2482 2 years ago +2

    Extremely helpful. Very well explained in concrete and abstract terms.

  • @alexvass
    @alexvass A year ago

    Thanks

  • @牢獄プンレク
    @牢獄プンレク 3 years ago +6

    Amazingly easy to understand. Thank you.

  • @chrispapadakis3965
    @chrispapadakis3965 3 years ago +2

    Just for anyone confused: according to the illustration in the summary, the weight matrix should have 5 rows instead of the 4 shown in the video.
    Great video, and I admire that your topics of choice are really the latest hot stuff in ML!

  • @raziehrezaei3156
    @raziehrezaei3156 3 years ago +1

    such an easy-to-grasp explanation! such a visually nice video! amazing job!

    • @DeepFindr
      @DeepFindr  3 years ago

      Thanks, I appreciate it :)

  • @toluolu9390
    @toluolu9390 2 years ago +1

    Very well explained. Thank you very much!

  • @sadhananarayanan1031
    @sadhananarayanan1031 A year ago

    Thank you so much for this beautiful video. I have tried many videos on GNNs and GATs, but this one definitely tops them. I finally understood the concept behind it. Keep up the good work :)

  • @AkhmadMizkat
    @AkhmadMizkat A year ago

    This is a great explanation covering basic GNNs and GAT. Thank you so much.

  • @Moreahead1
    @Moreahead1 A year ago

    A really clear explanation; the best video lecture on GNNs I've ever seen.

  • @NadaaTaiyab
    @NadaaTaiyab 2 years ago +1

    I'd love it if you could explain multi-head attention as well. You really have such a good grasp of this very complex subject.

    • @DeepFindr
      @DeepFindr  2 years ago

      Hi! Thanks!
      Multi-head attention simply means that several attention mechanisms are applied at the same time. It's like cloning the regular attention.
      What exactly is unclear here? :)

    • @NadaaTaiyab
      @NadaaTaiyab 2 years ago

      @@DeepFindr The math and code are hard to fully grasp. If you could break down the linear algebra with the matrix diagrams as you have done for single head attention, I think people would find that very helpful.
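For readers in this thread asking about the math behind multi-head attention: below is a minimal, self-contained PyTorch sketch (the toy graph, dimensions, and random weights are illustrative, not from the video). Each head is an independent clone of the single-head mechanism with its own W and attention vector a, and the head outputs are concatenated:

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)

num_nodes, in_dim, out_dim, heads = 5, 4, 8, 3
x = torch.randn(num_nodes, in_dim)   # node feature matrix
adj = torch.eye(num_nodes)           # self-loops only, for brevity
adj[0, 1] = adj[1, 0] = 1.0          # plus one undirected edge

# One weight matrix W and one attention vector a per head:
# multi-head attention is K independent copies of single-head attention.
W = torch.randn(heads, in_dim, out_dim)
a = torch.randn(heads, 2 * out_dim)

head_outputs = []
for k in range(heads):
    h = x @ W[k]  # [N, out_dim] node embeddings for head k
    # e_ij = LeakyReLU(a^T [h_i || h_j]), decomposed as a1^T h_i + a2^T h_j
    e = F.leaky_relu(
        h @ a[k, :out_dim].unsqueeze(1) + (h @ a[k, out_dim:].unsqueeze(1)).T,
        negative_slope=0.2,
    )  # [N, N] raw scores
    e = e.masked_fill(adj == 0, float('-inf'))  # keep only real neighbors
    alpha = torch.softmax(e, dim=1)             # attention coefficients per row
    head_outputs.append(alpha @ h)              # weighted neighborhood aggregation

out = torch.cat(head_outputs, dim=1)  # concatenated heads: [N, heads * out_dim]
print(out.shape)  # torch.Size([5, 24])
```

The matrix picture is the same as in the single-head case, repeated per head; averaging instead of concatenating the heads is the usual variant for a final output layer.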

  • @NadaaTaiyab
    @NadaaTaiyab 2 years ago +1

    Great! Thank you for explaining the math and the linear algebra with the simple tables.

  • @wenqichen4151
    @wenqichen4151 3 years ago

    I really salute you for this detailed video! It's very intriguing and clear! Thank you again!

  • @Eisneim1
    @Eisneim1 A year ago

    very helpful tutorial, clearly explained!

  • @dharmendraprajapat4910
    @dharmendraprajapat4910 2 years ago

    At 4:00, do you multiply the node feature matrix with the adjacency matrix before multiplying it with the learnable weight matrix?

  • @sapirharary8262
    @sapirharary8262 3 years ago +2

    Great video! Your explanation was amazing. Thank you!!

  • @waelmikaeel4244
    @waelmikaeel4244 2 months ago

    Great job mate, keep it up

  • @kodjigarpp
    @kodjigarpp 3 years ago

    Thank you for sharing this clear and well-designed explanation.

  • @hainingliu3471
    @hainingliu3471 A year ago

    Very clear explanation. Thank you!

  • @mahmoudebrahimkhani1384
    @mahmoudebrahimkhani1384 A year ago

    Simple and informative! Thank you!

  • @marcusbluestone2822
    @marcusbluestone2822 A year ago

    Very clear and helpful. Thank you so much!

  • @mamore.
    @mamore. 3 years ago

    The most understandable explanation so far!

  • @huaiyuzheng5577
    @huaiyuzheng5577 3 years ago +2

    Very nice video. Thanks for your work~

  • @HSP_ASMR
    @HSP_ASMR 3 years ago

    Very helpful explanation! Thank you!

  • @EDward-u1f6i
    @EDward-u1f6i A year ago

    The best video for learning GNNs, thank you so much!

  • @陈肇坤
    @陈肇坤 2 years ago +1

    Good explanation of the key idea. One question: what is the difference between GAT and self-attention constrained by an adjacency matrix (e.g. Softmax(Attn*Adj))? The memory used by GAT is D*N^2, which is D times the intermediate output of SA, so the number of nodes in a graph used with GAT cannot be too large because of memory size. But it seems that both implement dynamic weighting of neighborhood information constrained by an adjacency matrix.

    • @DeepFindr
      @DeepFindr  2 years ago

      Hi,
      Did you have a look at the implementation in PyG? pytorch-geometric.readthedocs.io/en/latest/_modules/torch_geometric/nn/conv/gat_conv.html#GATConv
      One of the key tricks in GNNs is usually to represent the adjacency matrix in COO format, so you have adjacency lists instead of an n x n matrix.
      Using functions like gather or index_select you can then do a masked selection of the local nodes.
      Hope this helps :)
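To illustrate the COO trick from this reply, here is a small sketch (PyTorch; the tiny graph and feature values are made up): `index_select` gathers the node features at both ends of every edge without ever materializing an N x N matrix.

```python
import torch

# Edges in COO format: one column per edge (source -> target).
edge_index = torch.tensor([[0, 1, 1, 2],     # source nodes
                           [1, 0, 2, 1]])    # target nodes
x = torch.arange(12, dtype=torch.float).view(4, 3)   # 4 nodes, 3 features each

src, dst = edge_index
# Gather the feature vectors at both ends of every edge.
h_src = x.index_select(0, src)   # [num_edges, 3]
h_dst = x.index_select(0, dst)   # [num_edges, 3]

# Per-edge inputs for the attention network are the concatenations:
pairs = torch.cat([h_src, h_dst], dim=1)     # [num_edges, 6]
print(pairs.shape)  # torch.Size([4, 6])
```

Memory then scales with the number of edges rather than with N^2, which is what makes GAT feasible on large sparse graphs.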

  • @amansah6615
    @amansah6615 2 years ago

    An easy and excellent explanation.
    Nice work!

  • @omarsoud2015
    @omarsoud2015 2 years ago

    Thanks for the best explanation.

  • @benjamintan3069
    @benjamintan3069 2 years ago

    I need more graph neural network videos!!

    • @DeepFindr
      @DeepFindr  2 years ago

      There will be some more in the future. Anything in particular you are interested in? :)

  • @geletamekonnen2323
    @geletamekonnen2323 2 years ago

    Thank you bro. My confused head now gets the idea of GNNs.

  • @sukantabasu
    @sukantabasu 11 months ago

    Simply exceptional!

  • @eelsayed9380
    @eelsayed9380 2 years ago +1

    Great explanation, really appreciated.
    Could you please make a video explaining the loss calculation and backpropagation in GNNs?

  • @philipkamau6288
    @philipkamau6288 3 years ago

    Thanks for sharing the knowledge!

  • @SylwiaNano
    @SylwiaNano 2 years ago

    Thanks for the awesome explanation!
    A video on attention in CNNs, e.g. UNet, would be great :)

    • @DeepFindr
      @DeepFindr  2 years ago

      I briefly cover that in my video on diffusion models. I've noted it down for the future, though.

  • @leo.y.comprendo
    @leo.y.comprendo 3 years ago

    I learned so much from this video! Thanks a lot

  • @Jorvanius
    @Jorvanius 3 years ago

    Excellent job, mate 👍👍

  • @salahaldeen1751
    @salahaldeen1751 A year ago

    Wonderful explanation! Thanks.

  • @cw9249
    @cw9249 A year ago

    Thank you. What if you also wanted to have edge features?

    • @DeepFindr
      @DeepFindr  A year ago

      Hi, I have a video on how to use edge features in GNNs :)

  • @n.a.7271
    @n.a.7271 2 years ago

    How is the learnable weight matrix formed? Do you have some material to understand it better?

    • @DeepFindr
      @DeepFindr  2 years ago

      This simply comes from dense (fully connected) layers. There are lots of resources, for example here: analyticsindiamag.com/a-complete-understanding-of-dense-layers-in-neural-networks/#:~:text=The%20dense%20layer's%20neuron%20in,vector%20of%20the%20dense%20layer.

  • @farzinhaddadpour7192
    @farzinhaddadpour7192 A year ago

    Very nice, thanks for the effort!

  • @Ssc2969
    @Ssc2969 A year ago

    Fantastic explanation.

  • @sajjadayobi688
    @sajjadayobi688 2 years ago

    A great explanation, many thanks

  • @kevon217
    @kevon217 A year ago

    Great walkthrough.

  • @kanalarchis
    @kanalarchis 3 years ago

    At 11:30, should the denominator have k instead of j?
    Also, this vector w_a: is it the same vector used for all edges? There isn't a different vector to learn for each node i, right? Thank you!

    • @DeepFindr
      @DeepFindr  3 years ago

      Ohh yeah, you are right. It should be k...
      Yes, it's a shared vector used for all edges. Thank you for catching that!

  • @dariomendoza6079
    @dariomendoza6079 2 years ago

    Excellent explanation 👌 👏🏾

  • @mbzf2773
    @mbzf2773 3 years ago

    Thank you so much for this great video.

  • @PaxonFrady
    @PaxonFrady 4 months ago

    Why would the attention adjacency matrix be symmetric? If the weight vector is learnable, then it does matter in which order the two input vectors are concatenated. There doesn't seem to be any reason to enforce symmetry.

  • @dominikklepl7991
    @dominikklepl7991 3 years ago +3

    Thank you for the great video. I have one question: what happens if weighted graphs are used with an attention GNN? Do you think adding the attention-learned edge "weights" will improve the model compared to just using the input edge weights (e.g. training a GCNN with weighted graphs)?

    • @DeepFindr
      @DeepFindr  3 years ago +2

      Hi! Yes, I think so. The fact that the attention weights are learnable makes them more powerful than static weights.
      The model might still want to put more attention on a node because there is valuable information in the node features, independent of the weight.
      A real-world example of this might be the data traffic between two network nodes. If less data is sent between two nodes, you probably assign a smaller weight to the edge. Still, it could be that the information coming from one node is very important, and therefore the model pays more attention to it.

  • @metehkaya96
    @metehkaya96 3 months ago

    A perfect video for understanding GATs. However, I guess you forgot to add the sigmoid function when you demonstrate h1' as a sum of products of hi* and attention values in the last seconds of the video: 13:51

  • @RyanOng-t2o
    @RyanOng-t2o A year ago

    Thanks for the great explanation! Just one thing I don't really understand: may I ask how you get the size of the learnable weight matrix [4, 8]? I understand that there are 4 rows due to the number of features per node, but I'm not sure where the 8 columns come from.

    • @mistaroblivion
      @mistaroblivion A year ago

      I think 8 is the arbitrarily chosen dimensionality of the embedding space.
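That reading can be checked with a two-line sketch (PyTorch; the numbers simply mirror the [4, 8] example from the video): the 4 comes from the input feature size, while the 8 is just the chosen output width of the linear layer, a free hyperparameter.

```python
import torch.nn as nn

in_features, embedding_dim = 4, 8   # 8 is an arbitrary design choice

# A dense layer whose weight, transposed, is exactly the [4, 8] matrix
# multiplied against each node's 4 features to produce an 8-dim embedding.
W = nn.Linear(in_features, embedding_dim, bias=False)
print(W.weight.T.shape)  # torch.Size([4, 8])
```

Picking 16 or 32 instead of 8 would work just as well; it only changes the embedding size.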

  • @arnaiztech
    @arnaiztech 3 years ago

    Outstanding explanation

  • @MaryamSadeghi-u6u
    @MaryamSadeghi-u6u 3 months ago

    Great video, thank you!

  • @zacklee5787
    @zacklee5787 7 months ago

    I have come to understand attention as key, query, value multiplication/addition. Do you know why that wasn't used here, and whether it's appropriate to call this attention?

    • @DeepFindr
      @DeepFindr  7 months ago

      Hi,
      Query/key/value is just a design choice of the Transformer model; attention itself is a more general technique.
      There is also a GNN Transformer (look for Graphormer) that follows the query/key/value pattern. The attention mechanism is detached from that concept and is simply a way to learn importance between embeddings.

  • @Bwaaz
    @Bwaaz 11 months ago

    Great quality, thank you!

  • @sharadkakran531
    @sharadkakran531 3 years ago +4

    Hi, can you tell us which tool you're using to make those amazing visualizations? All of your videos on GNNs are great, btw :)

    • @DeepFindr
      @DeepFindr  3 years ago +1

      Thanks a lot! Haha, I use ActivePresenter (it's free for the basic version), but I guess there are better alternatives out there. Still experimenting :)

  • @sqliu9489
    @sqliu9489 2 years ago

    Thanks for the video! One question: at 13:03, I think the 'adjacency matrix' consisting of {e_ij} could be symmetric, but after the softmax operation the 'adjacency matrix' consisting of {α_ij} should no longer be symmetric. Is that right?

    • @DeepFindr
      @DeepFindr  2 years ago

      Yes usually the attention weights do not have to be symmetric. Is that what you mean? :)

    • @sqliu9489
      @sqliu9489 2 years ago

      @@DeepFindr Yes. Thanks for your reply!
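A tiny numerical sketch of the point discussed in this thread (PyTorch; the score values are made up): even when the raw scores e_ij are symmetric, each row of the softmax normalizes over a different neighborhood, so the resulting α_ij is generally not symmetric.

```python
import torch

ninf = float('-inf')
# Symmetric raw scores e_ij for a 3-node path graph 0 - 1 - 2 (with self-loops);
# -inf marks non-neighbors, which softmax maps to 0.
e = torch.tensor([[0.0, 2.0, ninf],
                  [2.0, 0.0, 1.0],
                  [ninf, 1.0, 0.0]])

alpha = torch.softmax(e, dim=1)   # each row normalizes over its own neighbors
# alpha[0,1] divides by node 0's two neighbors, alpha[1,0] by node 1's three.
print(round(alpha[0, 1].item(), 3), round(alpha[1, 0].item(), 3))  # 0.881 0.665
```

So e_01 = e_10, yet α_01 ≠ α_10 because the denominators differ.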

  • @etiennetiennetienne
    @etiennetiennetienne 2 years ago

    Why replace dot-product attention with a concat projection + leaky ReLU?

    • @DeepFindr
      @DeepFindr  2 years ago

      That's a good point. I think the TransformerConv is the layer that uses dot-product attention. I'm also not aware of any reason why GAT was implemented like that. Maybe it's because this considers the direction of information (so source and target nodes) better: the dot product is commutative, so i*j is the same as j*i, and it can't distinguish the direction of information flow. Just an idea :)
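The direction-sensitivity point can be seen in a few lines (PyTorch; random vectors, purely illustrative): the dot product of two embeddings is identical in both orders, while the GAT-style concat + LeakyReLU score generally is not.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
d = 8
h_i, h_j = torch.randn(d), torch.randn(d)   # two node embeddings
a = torch.randn(2 * d)                      # attention vector (would be learned)

def gat_score(u, v):
    # LeakyReLU(a^T [u || v]) -- order of concatenation matters.
    return F.leaky_relu(a @ torch.cat([u, v]), negative_slope=0.2)

# Dot product is commutative; the concat projection is not.
print(bool(torch.dot(h_i, h_j) == torch.dot(h_j, h_i)))   # True
print(bool(gat_score(h_i, h_j) == gat_score(h_j, h_i)))   # False (in general)
```

With random `a`, the two halves of the attention vector differ, so swapping source and target changes the score.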

  • @anvuong1099
    @anvuong1099 2 years ago

    Thank you for the wonderful content.

  • @zheed4555
    @zheed4555 A year ago

    This is very helpful!

  • @clayouyang2157
    @clayouyang2157 3 years ago

    Is the weight vector dependent on the number of nodes in the graph? If I have a larger graph, will I get a higher-dimensional weight vector?

    • @DeepFindr
      @DeepFindr  3 years ago

      No, the weight vector has a fixed size; it is applied to each node's feature vector. For example, if you have 5 nodes and a feature size of 10, then the weight matrix with 128 neurons would be (10, 128). If you have more nodes, just the batch dimension is bigger.
      Hope this answers the question :)

    • @clayouyang2157
      @clayouyang2157 3 years ago

      @@DeepFindr thank you so much

    • @corwinbroekhuizen3619
      @corwinbroekhuizen3619 3 months ago

      @@DeepFindr Is the generic GNN weight matrix the same matrix for the entire graph, or is it a different matrix for each node applied to all its neighbours? Also, how does it deal with heterogeneous data, where the input feature vector dimensions differ?
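The fixed-size point from this thread in a minimal sketch (PyTorch; the sizes mirror the 5-node / feature-size-10 / 128-neuron example in the reply above): the same weight matrix serves graphs of any size, because only the batch dimension changes.

```python
import torch
import torch.nn as nn

W = nn.Linear(10, 128, bias=False)   # feature size 10 -> 128 embeddings

x_small = torch.randn(5, 10)     # graph with 5 nodes
x_large = torch.randn(500, 10)   # graph with 500 nodes

# The single (10, 128) matrix is applied to every node in either graph;
# the number of nodes only affects the batch dimension of the output.
print(W(x_small).shape, W(x_large).shape)
```

This sharing across all nodes is exactly why GNN layers work on variable-sized graphs.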

  • @nazarzaki44
    @nazarzaki44 2 years ago

    Great video! Thank you

  • @ayushsaha5539
    @ayushsaha5539 2 years ago

    Why does the newly calculated state have more features than the original state? I don't understand.

    • @DeepFindr
      @DeepFindr  2 years ago

      It's because the output dimension (number of neurons) of the neural network is different from the input dimension.
      You could also have fewer or the same number of features.

  • @AbleLearners
    @AbleLearners A year ago

    A great explanation.

  • @nastaranmarzban1419
    @nastaranmarzban1419 2 years ago

    Hi, hope you're doing well.
    Is there any graph neural network architecture that accepts a multivariate dataset instead of graph-structured data as input?
    I'd be very thankful if you could answer; I really need it.
    Thanks in advance.

    • @DeepFindr
      @DeepFindr  2 years ago

      Hi! As the name implies, graph neural networks expect graph structured input. Please see my latest videos on how to convert a dataset to a graph. It's not that difficult :)

    • @nastaranmarzban1419
      @nastaranmarzban1419 2 years ago

      @@DeepFindr Thanks for the prompt response.
      Sure, I'll watch it right now.
      Would you please send its link?

    • @DeepFindr
      @DeepFindr  2 years ago

      ruclips.net/video/AQU3akndun4/видео.html

  • @ilyasaroui7745
    @ilyasaroui7745 2 years ago

    How do you think it will behave with complete graphs only?

    • @DeepFindr
      @DeepFindr  2 years ago +1

      Well, it will simply calculate attention weights over all neighbor nodes, so every node attends to all other nodes. It's a bit like the Transformer, which attends to all words.
      This paper might also be interesting:
      arxiv.org/abs/2105.14491

  • @muhammadwaqas-gs1sp
    @muhammadwaqas-gs1sp 3 years ago

    Brilliant video 👍👍👍

  • @GaoyuanFanboy123
    @GaoyuanFanboy123 A year ago

    Please use brackets and multiplication signs between matrices so I can map the mathematical formula to the visualization.

  • @dmitrivillevald9274
    @dmitrivillevald9274 3 years ago

    Thank you for the great video! I wanted to ask: how is training of this network performed when the instances (input graphs) have varying numbers of nodes and/or adjacency matrices? It seems that W would not depend on the number of nodes (as its shape is 4 node features x 8 node embeddings), but the shape of the attention weight matrix Wa would (as it is proportional to the number of edges connecting node 1 with its neighbors).

    • @DeepFindr
      @DeepFindr  3 years ago +2

      Hi! The attention weight matrix always has the same shape. The input size is twice the node embedding size, because it always takes a pair of neighbor embeddings and predicts the attention coefficient for them. Of course, if you have more connected nodes you will have more of these pairs, but you can think of it as the batch dimension increasing, not the input dimension.
      For instance, say you have node embeddings of size 3. Then the input to the fully connected network is, for example, [0.5, 1, 1, 0.6, 2, 1], the concatenated node embeddings of two neighbors (size = 3+3). It doesn't matter how many of these you feed into the attention weight matrix.
      If a node has 3 neighbors, it would look like this:
      [0.5, 1, 1, 0.6, 2, 1]
      [0.5, 1, 1, 0.7, 3, 2]
      [0.5, 1, 1, 0.8, 4, 3]
      The output is then 3 attention coefficients, one for each neighbor.
      Hope this makes sense :)

    •  3 years ago

      @@DeepFindr If graph sizes are different anyway, say graph_1 has 2200 nodes (resulting in a 2200 x 2200 adjacency matrix) and graph_2 has 3000 nodes (3000 x 3000), you can zero-pad graph_1 to 3000. This way you have a fixed input size for graph_1 and graph_2. Zero padding creates dummy nodes with no connections, so the sum with the neighboring nodes will be 0, and with dummy features for the dummy nodes you end up with fixed-size graphs.

    • @DeepFindr
      @DeepFindr  3 years ago

      Hi, yes that's true! But for the attention mechanism used here, no fixed graph size is required; it also works for a different number of nodes.
      But yes, padding is a good idea to get the same shapes :)

  • @sangramkapre
    @sangramkapre 2 years ago +2

    Awesome video! Quick question: do you have a video explaining Cluster-GCN? And if so, do you know whether a similar clustering idea can be applied to other networks (like GAT) so the model can be trained on large graphs? Thanks!

  • @imalive404
    @imalive404 3 years ago

    Great explanation! As you pointed out, this is one kind of attention mechanism. Can you also provide references to other attention mechanisms?

    • @DeepFindr
      @DeepFindr  3 years ago

      Hi! The video in the description from this other channel explains the general attention mechanism used in transformers quite well :) Or are you looking for other attention mechanisms in GNNs?

    • @imalive404
      @imalive404 3 years ago

      @@DeepFindr Yes, thanks for sharing that in the video too. I was curious about attention mechanisms in GNNs.

    • @DeepFindr
      @DeepFindr  3 years ago +1

      OK :)
      In my next video (of the current GNN series) I will also quickly talk about Graph Transformers. There the attention coefficients are calculated with a dot product of keys and queries.
      I hope to upload this video this week or next :)

  • @MariaPirozhkova
    @MariaPirozhkova A year ago

    Hi! Are what you explain in the "Basics" and the message-passing concept the same thing?

    • @DeepFindr
      @DeepFindr  A year ago

      Yes, they are the same thing :) Passing messages is in the end nothing else but multiplying with the adjacency matrix. It's just a common term to better illustrate how the information is shared :)
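A minimal sketch of that statement (PyTorch; a made-up 3-node path graph): one round of message passing is literally a multiplication with the adjacency matrix.

```python
import torch

# 3-node path graph 0 - 1 - 2, adjacency with self-loops.
A = torch.tensor([[1., 1., 0.],
                  [1., 1., 1.],
                  [0., 1., 1.]])
X = torch.tensor([[1., 0.],
                  [0., 1.],
                  [2., 2.]])

# Each node "receives messages" by summing the features of itself
# and its neighbors -- exactly what the matrix product computes.
print(A @ X)
```

GNN layers then add learnable weights and normalization on top, but the neighborhood aggregation itself is this product.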

  • @roufaidalaidi8597
    @roufaidalaidi8597 2 years ago

    Thanks a lot; your videos are really helpful. I have a few questions regarding weighted graphs: would attention still be useful if the edges are weighted? If so, how do you pass edge weights to the attention network? Can you suggest a paper doing that?

    • @DeepFindr
      @DeepFindr  2 years ago +1

      The GAT layer in PyG supports edge features but not edge weights, so I would simply treat the weights as one-dimensional edge features.
      The attention then additionally considers these weights.
      The learned attention weights and the edge weights are probably somewhat correlated, but I think it won't hurt to include them in the attention calculation. Maybe the attention mechanism can learn even better scores for the aggregation :) I would just give it a try and see what happens. For example, compare RGCN + edge weights with GAT + edge features.

    • @roufaidalaidi8597
      @roufaidalaidi8597 2 years ago

      @@DeepFindr Thanks a lot for the reply.

  • @abhishekomi1573
    @abhishekomi1573 2 года назад

    I am following your playlist on GNN and this is the best content I get as of now.
    I have a CSV file and want to apply GNN on it but I don't understand how to find the edge features from the CSV file

    • @DeepFindr
      @DeepFindr  2 years ago +2

      Thanks! Did you see my latest two videos? They show how to convert a CSV file to a graph dataset. Maybe that helps you get started :)

    • @abhishekomi1573
      @abhishekomi1573 2 years ago

      @@DeepFindr Thanks, I hope I'll find my answer :-)

  • @james.oswald
    @james.oswald 3 years ago

    Great Video!

  • @bennicholl7643
    @bennicholl7643 2 years ago

    How is the adjacency matrix derived?

    • @DeepFindr
      @DeepFindr  2 years ago

      Hi, what exactly do you mean by derived? :)

    • @bennicholl7643
      @bennicholl7643 2 years ago

      @@DeepFindr What criteria decide which feature vectors are zeroed out?

    • @DeepFindr
      @DeepFindr  2 years ago

      That depends on the input graph. For the molecule, it's simply the atoms that are not connected to a given atom.
      All nodes that are not connected to a given node have a 0 in the corresponding adjacency matrix entries.

  • @yusufani8
    @yusufani8 2 years ago

    Amazing thank you 🤩

  • @Kevoshea
    @Kevoshea 8 months ago

    great video, thanks

  • @hengdezhu2832
    @hengdezhu2832 3 years ago

    Thanks a lot for the excellent tutorial. Just a quick question: when training the single-layer attention network, what are the labels of the input? How is this single-layer network trained?

    • @DeepFindr
      @DeepFindr  3 years ago +1

      Thanks!
      Typically you train it with your custom problem. So the embeddings will be specific to your use-case. For example if you want to classify molecules, then the loss of this classification problem is used to optimize the layer. The labels are then the classes.
      It is however also possible to train universal embeddings. This can be done by using a distance metric such as cosine distance. The idea is that similar inputs should lead to similar embeddings and the labels would then be the distance between graphs.
      With both options the weights in the attention layer can be optimized.
      It is also possible to train GNNs in an unsupervised fashion, there exist different approaches in the literature.
      Hope this answers the question :)

    • @hengdezhu2832
      @hengdezhu2832 3 years ago

      @@DeepFindr Thanks! Sorry, my question might have been confusing. For the node classification task, if we use the distance metrics between nodes as labels to train the weights of the attention layer, then I think the attention layer that computes the attention coefficients is not needed, because we can get the importance by computing the distance metrics. I wonder how we can train the weights of the shared attentional mechanism. Thanks again!

    • @DeepFindr
      @DeepFindr  3 years ago +1

      Yes, you are right. The attention mechanism using the dot product will also lead to similar embeddings for nodes that share the same neighborhood.
      However, the difference is that the attention mechanism is local: it only calculates the attention coefficients for the neighboring nodes.
      Using the distance as targets can, however, be applied to all nodes in the input graph.
      But I agree, the various GNN layers may be differently useful depending on the application.

    • @hengdezhu2832
      @hengdezhu2832 3 years ago

      Got it! Thanks again!

  • @טסטטסט-ג3ש
    @טסטטסט-ג3ש 2 years ago

    Very understandable! Thank you.
    Can you share your presentation?

    • @DeepFindr
      @DeepFindr  2 years ago

      Sure! Can you send me an email to deepfindr@gmail.com and I'll attach it :) thx

    • @keteverma3441
      @keteverma3441 2 years ago +1

      @@DeepFindr Hey, I have also sent you an email; could you please attach the presentation?

  • @nastaranmarzban1419
    @nastaranmarzban1419 2 years ago

    Hi, sorry to bother you.
    I have a question: what's the difference between soft attention and self-attention?

    • @DeepFindr
      @DeepFindr  2 years ago

      Hi! There is soft vs. hard attention; you can search for it on Google.
      For self-attention there are great tutorials, such as this one: peltarion.com/blog/data-science/self-attention-video

  • @البداية-ذ1ذ
    @البداية-ذ1ذ 3 years ago

    Hello, thanks for sharing. Could you please explain how you get the learnable matrix? Is it chosen randomly, or is there a method behind it, and is it equivalent to the Laplacian method?
    One more question: your embedding is only on the node level, right?

    • @DeepFindr
      @DeepFindr  3 years ago +1

      Hi, the learnable weight matrix is randomly initialized and then updated through backpropagation. It's just a classical fully connected neural network layer.
      Yes, the embedding is on the node level :)