How to explain Graph Neural Networks (with XAI)

  • Premiered Jun 28, 2024
  • ▬▬ Papers ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
    GNNExplainer: arxiv.org/abs/1903.03894
    Survey: arxiv.org/abs/2012.15445
    ▬▬ Used Music ▬▬▬▬▬▬▬▬▬▬▬
    Music from Uppbeat (free for Creators!):
    uppbeat.io/t/prigida/moonshine
    License code: ZKXJTIWE0MVWWKFJ
    ▬▬ Timestamps ▬▬▬▬▬▬▬▬▬▬▬
    00:00 Introduction
    00:28 XAI for other data
    01:20 XAI + GNNs
    02:30 Overview of methods
    06:21 GNNExplainer
    09:15 Mathematical details
    13:28 Example
    13:54 GNNExplainer extensions
    14:37 Python library
    ▬▬ Support me if you like 🌟
    ►Link to this channel: bit.ly/3zEqL1W
    ►Support me on Patreon: bit.ly/2Wed242
    ►Buy me a coffee on Ko-Fi: bit.ly/3kJYEdl
    ▬▬ My equipment 💻
    - Microphone: amzn.to/3DVqB8H
    - Microphone mount: amzn.to/3BWUcOJ
    - Monitors: amzn.to/3G2Jjgr
    - Monitor mount: amzn.to/3AWGIAY
    - Height-adjustable table: amzn.to/3aUysXC
    - Ergonomic chair: amzn.to/3phQg7r
    - PC case: amzn.to/3jdlI2Y
    - GPU: amzn.to/3AWyzwy
    - Keyboard: amzn.to/2XskWHP
    - Bluelight filter glasses: amzn.to/3pj0fK2
  • Science & Technology

Comments • 45

  • @user-hl5sk1oj1m
    @user-hl5sk1oj1m 2 years ago +3

    I can't believe I can watch these high-quality videos that exactly fit my interests on YouTube. Thank you so much!

    • @DeepFindr
      @DeepFindr  2 years ago

      Happy that you like it :)

  • @bayesian7404
    @bayesian7404 several months ago

    Great talk. It’s very clearly explained and well presented.

  • @pharujrajborirug4934
    @pharujrajborirug4934 2 years ago +3

    Great series. Really appreciate it.

  • @ishgirwan
    @ishgirwan 2 years ago +4

    Nicely explained. Thanks :)

  • @dtr_cpg
    @dtr_cpg 2 years ago

    Super, thank you!

  • @ThePritt12
    @ThePritt12 2 years ago +3

    12:40 The sigmoid function maps to [0,1], not to {0,1}. I think they also write in the paper that they go for a continuous approximation: "σ denotes the sigmoid that maps the mask to [0,1]^n×n." Not sure if the binarization happens elsewhere, though.
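
The commenter's point is easy to verify: the sigmoid produces a continuous ("soft") mask, and a binary mask would need an explicit thresholding step. A minimal pure-Python sketch (the mask values are illustrative, not from the paper):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Unconstrained mask parameters, as learned by GNNExplainer; values are illustrative.
mask_logits = [[2.0, -3.0],
               [-0.5, 4.0]]

# The sigmoid squashes each entry into the open interval (0, 1) -- a soft mask.
soft_mask = [[sigmoid(z) for z in row] for row in mask_logits]

# A hard {0, 1} mask would require an explicit thresholding step, e.g.:
hard_mask = [[1 if m > 0.5 else 0 for m in row] for row in soft_mask]
```
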

  • @xiaohaolin6464
    @xiaohaolin6464 2 years ago

    SO GOOD!!!

  • @clayouyang2157
    @clayouyang2157 2 years ago +2

    Nice video

  • @gnn816
    @gnn816 1 year ago

    Great video! Are these techniques for GNN explanations applicable to node-regression tasks (e.g. traffic prediction, where nodes correspond to traffic flow or speed)? Most research papers address node-classification tasks rather than regression.

  • @sherazbaloch1642
    @sherazbaloch1642 1 year ago

    Thank you : )

  • @torstenschindler1965
    @torstenschindler1965 2 years ago +3

    Great video!
    Which of the GNN explanation methods perform best in the “Benchmarks for interpretation of QSAR models”?
    Does GNNExplainer also work with an evidential layer as in “Evidential Deep Learning for Guided Molecular Property Prediction and Discovery”?

    • @DeepFindr
      @DeepFindr  2 years ago +1

      Thank you!
      I haven't found any comparison based on the QSAR benchmarks. But it would indeed be very helpful to have a comparison based on an established standard. In the paper I mentioned in the video, they also analyze the methods based on the standard XAI metrics such as fidelity etc.
      GNNExplainer only learns a mask on the input graph and queries the model with the perturbed instance. Therefore I would argue that the GNN's architecture is irrelevant, as only the inputs and outputs are needed.
      The only additional requirement is the computation graph, but since evidential layers follow the feature-extraction layers, this shouldn't be a problem.
      However, I haven't tried it; this is just my best guess. :)
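
The model-agnostic point in this reply can be sketched in a few lines: the explainer only needs to feed a masked (perturbed) graph into the model and compare outputs, so the model's internals never matter. A toy example with a made-up one-layer "black-box" model:

```python
import math

# Toy "black-box" model: one round of neighbor aggregation plus a nonlinearity.
# GNNExplainer treats the model as opaque -- it only needs inputs and outputs.
def model(x, adj):
    n = len(x)
    out = []
    for i in range(n):
        agg = sum(adj[i][j] * x[j] for j in range(n))
        out.append(math.tanh(agg))
    return out

x = [0.5, -1.0, 2.0]                     # node features (one scalar per node)
adj = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]  # original adjacency

# Learned soft edge mask (illustrative values); masking perturbs the input graph.
mask = [[0.0, 0.9, 0.1], [0.9, 0.0, 0.0], [0.1, 0.0, 0.0]]
masked_adj = [[adj[i][j] * mask[i][j] for j in range(3)] for i in range(3)]

out_full = model(x, adj)
out_masked = model(x, masked_adj)
# GNNExplainer's objective keeps out_masked close to out_full (mutual
# information) while penalizing the mask size, so only important edges survive.
```
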

    • @torstenschindler1965
      @torstenschindler1965 2 years ago

      @@DeepFindr Is there also a rational AI method for GNNs, which directly combines the explainability with uncertainty estimation?
      Thanks.

    • @DeepFindr
      @DeepFindr  2 years ago +1

      Hello, I think uncertainty quantification and explainable AI are not really connected yet. But I've seen that research in that direction is starting, such as in: www.researchgate.net/publication/352254836_Introducing_Uncertainty_into_Explainable_AI_Methods
      However, for GNN explainability this hasn't arrived yet, as far as I know.
      I don't know which model you have, but you can also use special architectures, like quantile regression, that allow you to measure uncertainty. In addition, you can apply the GNN XAI methods without adjustments for specific layers.
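
As a concrete illustration of the quantile-regression idea mentioned in the reply: training with the pinball loss penalizes under- and over-prediction asymmetrically, so the model learns the q-th quantile rather than the mean (a generic sketch, not specific to GNNs):

```python
def pinball_loss(y_true, y_pred, q):
    """Quantile (pinball) loss for a single prediction.

    For q = 0.9, underestimating costs 9x more than overestimating,
    which pushes the model toward the 90th percentile of the targets.
    """
    diff = y_true - y_pred
    return max(q * diff, (q - 1) * diff)

under = pinball_loss(1.0, 0.0, 0.9)  # model predicted too low  -> cost 0.9
over = pinball_loss(0.0, 1.0, 0.9)   # model predicted too high -> cost 0.1
```

Predicting several quantiles (e.g. q = 0.1 and q = 0.9) gives an interval that serves as an uncertainty estimate.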

  • @VLM234
    @VLM234 2 years ago +1

    Great video...

  • @_MarcoDelFuturo_
    @_MarcoDelFuturo_ 1 year ago

    I have a graph whose nodes have labels 1, 2, 3 or 4. I need to explain why the nodes with label 1 are predicted to be 1, and so on. Since I'm not interested in one specific node, do I need to apply multi-instance explanation? And since I think I get a node feature mask and an edge mask, how do I understand which nodes contribute to the predictions?

  • @GPCImpulse
    @GPCImpulse 2 years ago

    Really cool video!
    Regarding the DIG library, do you have any information or material on how to use the DIG implementations of GNN explanation methods on pytorch geometric models?

    • @DeepFindr
      @DeepFindr  2 years ago

      Hi! Thank you :)
      There is a folder with examples: github.com/divelab/DIG/tree/dig/examples/xgraph
      They are using models like GCN_2l, which are pre-implemented in the library. You can replace these models with any PyG model :)
      Hope this helps

    • @GPCImpulse
      @GPCImpulse 2 years ago

      @@DeepFindr Yes, I saw that. The issue with that is that they use predefined models from their own library, which are implemented differently from simpler stock PyG models. I'm wondering if DIG is compatible with arbitrary PyG models.

    • @DeepFindr
      @DeepFindr  2 years ago

      It should be I guess. :)
      Have you tried it with a simple model?
      The custom models in DIG all inherit from PyG layers so it should work fine :)

    • @GPCImpulse
      @GPCImpulse 2 years ago

      @@DeepFindr Yeah, I tried. The issue is argument parsing: DIG has this weird thing where some functions expect a forward call to take a data object, while others (including the predefined models themselves) expect the raw x and edge index. They appear to handle this with an arguments_read function that can handle both. I'll try that same approach and see if it works.

    • @DeepFindr
      @DeepFindr  2 years ago

      Mhh, OK, I see. I haven't used the library yet, I just found that it exists :D Good luck anyway!

  • @younesselbrag
    @younesselbrag 2 years ago

    Which software are you using to make all your lecture videos? If you could tell me, please 😊😊
    Thank you for sharing; your content is very informative for me.

    • @DeepFindr
      @DeepFindr  2 years ago

      Hi! Thanks :)
      Usually I use ActivePresenter, it's free :) Besides that, I've started to have a look at DaVinci Resolve (also free) to level up my video quality :D

  • @kk008
    @kk008 1 year ago

    Can this GNNExplainer work on heterogeneous GNNs?

  • @chingchangjason
    @chingchangjason 2 years ago

    Can you have explainability for edge features? Thanks!

    • @DeepFindr
      @DeepFindr  2 years ago

      I have not seen an implementation of that. You would need to adjust this code: pytorch-geometric.readthedocs.io/en/latest/_modules/torch_geometric/nn/models/gnn_explainer.html
      There is already an edge mask; the task would be to also consider edge features, which have an additional dimension, but I think the existing edge mask could serve as guidance.
      Best regards
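
One way to picture the extension suggested in this reply: the existing edge mask is one scalar per edge, while explaining edge features would need one weight per edge *and* feature. A hypothetical sketch (all names and values invented for illustration, not PyG API):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Three edges, each carrying two edge features.
edge_attr = [[1.0, 2.0], [0.5, -1.0], [3.0, 0.0]]

# GNNExplainer learns one scalar logit per edge ...
edge_mask = [2.0, -2.0, 0.0]
# ... while a per-feature extension (hypothetical) would learn one logit per entry.
edge_feat_mask = [[2.0, -2.0], [0.0, 2.0], [-2.0, 2.0]]

# The scalar mask scales all features of an edge by the same factor:
masked_scalar = [[f * sigmoid(m) for f in feats]
                 for feats, m in zip(edge_attr, edge_mask)]

# The per-feature mask can keep some features of an edge and suppress others:
masked_per_feature = [[f * sigmoid(m) for f, m in zip(feats, logits)]
                      for feats, logits in zip(edge_attr, edge_feat_mask)]
```
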

  • @saberdeilamie6669
    @saberdeilamie6669 2 years ago

    Hi, I want to know how the links between nodes are defined. For example, if I have 1000 nodes with different features, do I have to define the links between them, or does the graph itself do that for me?

    • @DeepFindr
      @DeepFindr  2 years ago +1

      Yes, you have to define the links :) I have just updated a video on how to build a graph dataset (the second-to-last video I've uploaded). I think this is what you are looking for :)
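
For readers wondering what "defining the links" looks like in practice: PyTorch Geometric stores edges as a 2 × num_edges index list (COO format), which you build yourself. A minimal sketch (the example edges are made up):

```python
# You decide which nodes are connected, e.g. from domain knowledge,
# k-nearest neighbors on the features, or a similarity threshold.
undirected_edges = [(0, 1), (1, 2)]

# PyG expects COO format: row 0 holds source nodes, row 1 holds targets,
# with each undirected edge stored once per direction.
src = [s for s, d in undirected_edges] + [d for s, d in undirected_edges]
dst = [d for s, d in undirected_edges] + [s for s, d in undirected_edges]
edge_index = [src, dst]  # in PyG this would become torch.tensor(edge_index)
```
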

    • @saberdeilamie6669
      @saberdeilamie6669 2 years ago

      @@DeepFindr thanks a million.

  • @ThePritt12
    @ThePritt12 2 years ago

    7:36 Gc can't be the computation graph you plotted, because the adjacency matrix of Gc is n×n (source: paper), where n is the number of nodes in the original graph. Your Gc contains more than n nodes. I also have no idea what Gc really is, because they do not define it properly in the paper, but this is not it.

    • @DeepFindr
      @DeepFindr  2 years ago +1

      You are right, the computation graph contains more nodes. I think, however, that you can decompose it layer-wise, which gives you an n×n matrix in each layer.
      Later in the video, at ~12 minutes, I also gave some comments on what Gc could be.
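
The layer-wise view in this reply can be made concrete: an L-layer message-passing GNN's prediction for a node depends only on its L-hop neighborhood, which you can find by repeatedly expanding along the adjacency. A small sketch on a made-up path graph:

```python
adj = [[0, 1, 0, 0],
       [1, 0, 1, 0],
       [0, 1, 0, 1],
       [0, 0, 1, 0]]  # path graph 0-1-2-3

def k_hop_nodes(adj, node, k):
    """Nodes reachable within k hops: the receptive field of a k-layer GNN,
    i.e. the nodes appearing in that node's computation graph."""
    reach = {node}
    for _ in range(k):
        reach = reach | {j for i in reach for j in range(len(adj)) if adj[i][j]}
    return sorted(reach)
```

For node 0, a 2-layer GNN only ever sees nodes 0, 1 and 2, which is why explainers like GNNExplainer restrict the mask to this subgraph.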

    • @ThePritt12
      @ThePritt12 2 years ago

      @@DeepFindr I also thought that what you show at ~12 minutes is Gc, but it does not match Figure 2 of the paper. And I wrote to the author of the paper; he said "[...] the subgraph of the computation graph of the GNN [...] consists of edges (in the sense of message passing; might not correspond to actual edges in the input graph, if you use a multi-hop GNN etc.)." So this can't be it either :D Super confusing!

  • @peasant12345
    @peasant12345 6 months ago

    I would not call it an explainable NN. It is just a way to debug training 😂

  • @jinavarghese7029
    @jinavarghese7029 2 years ago

    Can we have nodes with different types of values in a GNN? Like, an image as input for one node, audio as input for another node, signals as input for a third one, text as input for another one.

    • @DeepFindr
      @DeepFindr  2 years ago

      Hi! Yes, generally it is possible to have different signals for different nodes. This is called a heterogeneous graph; there are also some tutorials on this.
      However, I have not seen any support for e.g. three-dimensional image tensors as node features. Typically the node features need to be fixed-size vectors, so probably some preprocessing steps would be required (such as converting images into embeddings with a pretrained image model).
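
A sketch of the preprocessing idea from this reply: each modality gets its own encoder that maps raw input to a fixed-size vector, so every node ends up with features of the same dimension. The encoders below are crude stand-ins for real pretrained models:

```python
EMBED_DIM = 4

def encode_image(pixels):
    # Stand-in for a pretrained CNN: here just four summary statistics.
    flat = [p for row in pixels for p in row]
    return [min(flat), max(flat), sum(flat) / len(flat), float(len(flat))]

def encode_text(tokens):
    # Stand-in for a text embedding model: crude length-based statistics.
    lengths = [len(t) for t in tokens]
    return [min(lengths), max(lengths), sum(lengths) / len(lengths), float(len(tokens))]

# Every node now has the same feature dimensionality, so a standard
# (homogeneous) GNN can consume the graph; keeping per-type features
# instead would call for a heterogeneous GNN.
node_features = [
    encode_image([[0.1, 0.9], [0.5, 0.5]]),
    encode_text(["graph", "neural", "network"]),
]
```
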