A graph neural network (GNN) belongs to a class of artificial neural networks for processing data that can be represented as graphs. In the more general subject of "geometric deep learning", certain existing neural network architectures can be interpreted as GNNs operating on suitably defined graphs. A convolutional neural network layer, in the context of computer vision, can be seen as a GNN applied to graphs whose nodes are pixels, with edges connecting only adjacent pixels. A transformer layer, in natural language processing, can be seen as a GNN applied to complete graphs whose nodes are words or tokens in a passage of natural language text.

The key design element of GNNs is the use of ''pairwise message passing'', such that graph nodes iteratively update their representations by exchanging information with their neighbors. Since their inception, several different GNN architectures have been proposed, implementing different flavors of message passing, starting with recursive or convolutional constructive approaches. Whether it is possible to define GNN architectures "going beyond" message passing, or whether every GNN can be built on message passing over suitably defined graphs, is an open research question.

Relevant application domains for GNNs include natural language processing, social networks, citation networks, molecular biology, chemistry, physics, and NP-hard combinatorial optimization problems. Several open source libraries implementing graph neural networks are available, such as PyTorch Geometric (PyTorch), TensorFlow GNN (TensorFlow), jraph (Google JAX), and GraphNeuralNetworks.jl (Julia, Flux).


Architecture

The architecture of a generic GNN implements the following fundamental layers:

# Permutation equivariant: a permutation equivariant layer maps a representation of a graph into an updated representation of the same graph. In the literature, permutation equivariant layers are implemented via pairwise message passing between graph nodes. Intuitively, in a message passing layer, nodes update their representations by aggregating the messages received from their immediate neighbours. As such, each message passing layer increases the receptive field of the GNN by one hop.
# Local pooling: a local pooling layer coarsens the graph via downsampling. Local pooling is used to increase the receptive field of a GNN, in a similar fashion to pooling layers in convolutional neural networks. Examples include k-nearest neighbours pooling, top-k pooling, and self-attention pooling.
# Global pooling: a global pooling layer, also known as ''readout'' layer, provides a fixed-size representation of the whole graph. The global pooling layer must be permutation invariant, such that permutations in the ordering of graph nodes and edges do not alter the final output. Examples include element-wise sum, mean, or maximum (a minimal sketch of such a readout follows this list).

It has been demonstrated that GNNs cannot be more expressive than the Weisfeiler–Leman graph isomorphism test. In practice, this means that there exist different graph structures (e.g., molecules with the same atoms but different bonds) that cannot be distinguished by GNNs. More powerful GNNs operating on higher-dimension geometries such as simplicial complexes can be designed. Whether future architectures will overcome the message passing primitive is an open research question.


Message passing layers

Message passing layers are permutation-equivariant layers mapping a graph into an updated representation of the same graph. Formally, they can be expressed as message passing neural networks (MPNNs). Let G = (V,E) be a graph, where V is the node set and E is the edge set. Let N_u be the neighbourhood of some node u \in V. Additionally, let \mathbf{x}_u be the features of node u \in V, and \mathbf{e}_{uv} be the features of edge (u, v) \in E. An MPNN layer can be expressed as follows:

:\mathbf{h}_u = \phi \left( \mathbf{x}_u, \bigoplus_{v \in N_u} \psi(\mathbf{x}_u, \mathbf{x}_v, \mathbf{e}_{uv}) \right)

where \phi and \psi are differentiable functions (e.g., artificial neural networks), and \bigoplus is a permutation invariant aggregation operator that can accept an arbitrary number of inputs (e.g., element-wise sum, mean, or max). In particular, \phi and \psi are referred to as ''update'' and ''message'' functions, respectively. Intuitively, in an MPNN computational block, graph nodes ''update'' their representations by ''aggregating'' the ''messages'' received from their neighbours.

The outputs of one or more MPNN layers are node representations \mathbf{h}_u for each node u \in V in the graph. Node representations can be employed for any downstream task, such as node/graph classification or edge prediction.

Graph nodes in an MPNN update their representation by aggregating information from their immediate neighbours. As such, stacking n MPNN layers means that one node will be able to communicate with nodes that are at most n "hops" away. In principle, to ensure that every node receives information from every other node, one would need to stack a number of MPNN layers equal to the graph diameter. However, stacking many MPNN layers may cause issues such as oversmoothing and oversquashing. Oversmoothing refers to the issue of node representations becoming indistinguishable. Oversquashing refers to the bottleneck that is created by squeezing long-range dependencies into fixed-size representations. Countermeasures such as skip connections (as in residual neural networks), gated update rules, and jumping knowledge can mitigate oversmoothing. Modifying the final layer to be a fully-adjacent layer, i.e., by considering the graph as a complete graph, can mitigate oversquashing in problems where long-range dependencies are required.

Other "flavours" of MPNN have been developed in the literature, such as graph convolutional networks and graph attention networks, whose definitions can be expressed in terms of the MPNN formalism.
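
As an illustration of the update rule above, the following is a minimal sketch of one MPNN layer in plain PyTorch, with \phi and \psi realized as small multilayer perceptrons and element-wise sum as the aggregation operator. The class name MPNNLayer and the edge-list interface are illustrative assumptions, not the API of any particular library.

<syntaxhighlight lang="python">
import torch
import torch.nn as nn

class MPNNLayer(nn.Module):
    """One message passing layer:
    h_u = phi(x_u, sum over v in N(u) of psi(x_u, x_v, e_uv))."""

    def __init__(self, node_dim: int, edge_dim: int, hidden_dim: int):
        super().__init__()
        # Message function psi: acts on (x_u, x_v, e_uv).
        self.psi = nn.Sequential(
            nn.Linear(2 * node_dim + edge_dim, hidden_dim), nn.ReLU())
        # Update function phi: acts on (x_u, aggregated messages).
        self.phi = nn.Sequential(
            nn.Linear(node_dim + hidden_dim, node_dim), nn.ReLU())

    def forward(self, x, edge_index, edge_attr):
        # x: (N, node_dim) node features; edge_index: (2, E) with rows
        # (source, target); edge_attr: (E, edge_dim) edge features.
        src, dst = edge_index
        messages = self.psi(torch.cat([x[dst], x[src], edge_attr], dim=-1))
        # Permutation-invariant aggregation: sum messages arriving at each node.
        agg = torch.zeros(x.size(0), messages.size(-1), device=x.device)
        agg.index_add_(0, dst, messages)
        return self.phi(torch.cat([x, agg], dim=-1))
</syntaxhighlight>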


Graph convolutional network

The graph convolutional network (GCN) was first introduced by Thomas Kipf and Max Welling in 2017. A GCN layer defines a first-order approximation of a localized spectral filter on graphs. GCNs can be understood as a generalization of convolutional neural networks to graph-structured data. The formal expression of a GCN layer reads as follows:

:\mathbf{H} = \sigma\left(\tilde{\mathbf{D}}^{-\frac{1}{2}} \tilde{\mathbf{A}} \tilde{\mathbf{D}}^{-\frac{1}{2}} \mathbf{X} \mathbf{\Theta}\right)

where \mathbf{H} is the matrix of node representations \mathbf{h}_u, \mathbf{X} is the matrix of node features \mathbf{x}_u, \sigma(\cdot) is an activation function (e.g., ReLU), \tilde{\mathbf{A}} is the graph adjacency matrix with the addition of self-loops, \tilde{\mathbf{D}} is the graph degree matrix with the addition of self-loops, and \mathbf{\Theta} is a matrix of trainable parameters. In particular, let \mathbf{A} be the graph adjacency matrix: then, one can define \tilde{\mathbf{A}} = \mathbf{A} + \mathbf{I} and \tilde{D}_{ii} = \sum_{j \in V} \tilde{A}_{ij}, where \mathbf{I} denotes the identity matrix. This normalization ensures that the eigenvalues of \tilde{\mathbf{D}}^{-\frac{1}{2}} \tilde{\mathbf{A}} \tilde{\mathbf{D}}^{-\frac{1}{2}} are bounded in the range [-1, 1], avoiding numerical instabilities and exploding/vanishing gradients.

A limitation of GCNs is that they do not allow multidimensional edge features \mathbf{e}_{uv}. It is however possible to associate scalar weights w_{uv} to each edge by imposing A_{uv} = w_{uv}, i.e., by setting each nonzero entry in the adjacency matrix equal to the weight of the corresponding edge.
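
The following is a minimal sketch of a GCN layer in plain PyTorch, operating on a dense adjacency matrix and using ReLU as \sigma. The class name GCNLayer is illustrative; it is not the interface of PyTorch Geometric or any other library.

<syntaxhighlight lang="python">
import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    """H = sigma(D̃^{-1/2} Ã D̃^{-1/2} X Θ), with Ã = A + I
    and D̃ the degree matrix of Ã."""

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.theta = nn.Linear(in_dim, out_dim, bias=False)  # Θ

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # x: (N, in_dim) node features; adj: dense (N, N) adjacency matrix,
        # whose nonzero entries may carry scalar edge weights w_uv.
        a_tilde = adj + torch.eye(adj.size(0), device=adj.device)  # self-loops
        d_inv_sqrt = a_tilde.sum(dim=1).pow(-0.5)                  # D̃^{-1/2}
        norm_adj = d_inv_sqrt[:, None] * a_tilde * d_inv_sqrt[None, :]
        return torch.relu(norm_adj @ self.theta(x))
</syntaxhighlight>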


Graph attention network

The graph attention network (GAT) was introduced by Petar Veličković et al. in 2018. A graph attention network combines a graph neural network with an attention layer. Incorporating an attention layer into a graph neural network helps the model focus on the important parts of the data instead of treating all of it equally. A multi-head GAT layer can be expressed as follows:

:\mathbf{h}_u = \overset{K}{\underset{k=1}{\Big\Vert}} \sigma \left(\sum_{v \in N_u} \alpha_{uv}^k \mathbf{W}^k \mathbf{x}_v\right)

where K is the number of attention heads, \Big\Vert denotes vector concatenation, \sigma(\cdot) is an activation function (e.g., ReLU), \alpha_{uv}^k are attention coefficients, and \mathbf{W}^k is a matrix of trainable parameters for the k-th attention head.

For the final GAT layer, the outputs from each attention head are averaged before the application of the activation function. Formally, the final GAT layer can be written as:

:\mathbf{h}_u = \sigma \left(\frac{1}{K}\sum_{k=1}^K \sum_{v \in N_u} \alpha_{uv}^k \mathbf{W}^k \mathbf{x}_v\right)

Attention in machine learning is a technique that mimics cognitive attention. In the context of learning on graphs, the attention coefficient \alpha_{uv} measures ''how important'' node u \in V is to node v \in V. Normalized attention coefficients are computed as follows:

:\alpha_{uv} = \frac{\exp\left(\text{LeakyReLU}\left(\mathbf{a}^T [\mathbf{W} \mathbf{x}_u \Vert \mathbf{W} \mathbf{x}_v]\right)\right)}{\sum_{z \in N_u} \exp\left(\text{LeakyReLU}\left(\mathbf{a}^T [\mathbf{W} \mathbf{x}_u \Vert \mathbf{W} \mathbf{x}_z]\right)\right)}

where \mathbf{a} is a vector of learnable weights, \cdot^T indicates transposition, \Vert denotes vector concatenation, and \text{LeakyReLU} is a modified ReLU activation function. Attention coefficients are normalized to make them easily comparable across different nodes.

A GCN can be seen as a special case of a GAT where attention coefficients are not learnable, but fixed and equal to the edge weights w_{uv}.
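
A minimal sketch of a single attention head in plain PyTorch follows, using a dense adjacency matrix to mask non-edges before the softmax normalization of the attention coefficients. The class name GATHead, the dense interface, and the choice of ELU as \sigma are illustrative assumptions.

<syntaxhighlight lang="python">
import torch
import torch.nn as nn
import torch.nn.functional as F

class GATHead(nn.Module):
    """One GAT attention head on a dense adjacency matrix."""

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.W = nn.Linear(in_dim, out_dim, bias=False)   # W
        self.a = nn.Parameter(torch.randn(2 * out_dim))   # attention vector a

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # Assumes self-loops are present in adj so every row has an edge
        # (otherwise the softmax over an empty neighbourhood yields NaN).
        z = self.W(x)                                     # (N, out_dim)
        n = z.size(0)
        # e_uv = LeakyReLU(a^T [W x_u || W x_v]) for every pair (u, v).
        pair = torch.cat([z.unsqueeze(1).expand(n, n, -1),
                          z.unsqueeze(0).expand(n, n, -1)], dim=-1)
        e = F.leaky_relu(pair @ self.a)                   # (N, N) raw scores
        # Restrict to actual neighbourhoods, then normalize per node.
        e = e.masked_fill(adj == 0, float("-inf"))
        alpha = torch.softmax(e, dim=1)                   # attention coefficients
        return F.elu(alpha @ z)                           # sigma = ELU here
</syntaxhighlight>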


Gated graph sequence neural network

The gated graph sequence neural network (GGS-NN) was introduced by Yujia Li et al. in 2015. The GGS-NN extends the GNN formulation by Scarselli et al. to output sequences. The message passing framework is implemented as an update rule to a gated recurrent unit (GRU) cell. A GGS-NN can be expressed as follows:

:\mathbf{h}_u^{(0)} = \mathbf{x}_u \, \Vert \, \mathbf{0}
:\mathbf{m}_u^{(l+1)} = \sum_{v \in N_u} \mathbf{\Theta} \mathbf{h}_v
:\mathbf{h}_u^{(l+1)} = \text{GRU}(\mathbf{m}_u^{(l+1)}, \mathbf{h}_u^{(l)})

where \Vert denotes vector concatenation, \mathbf{0} is a vector of zeros, \mathbf{\Theta} is a matrix of learnable parameters, \text{GRU} is a gated recurrent unit cell, and l denotes the sequence index. In a GGS-NN, the node representations are regarded as the hidden states of a GRU cell. The initial node features \mathbf{x}_u^{(0)} are zero-padded up to the hidden state dimension of the GRU cell. The same GRU cell is used for updating representations for each node.
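
A compact sketch of this update in plain PyTorch follows, using torch.nn.GRUCell for the GRU. The class name GGSNNSketch and the dense-adjacency interface are illustrative assumptions rather than the authors' implementation.

<syntaxhighlight lang="python">
import torch
import torch.nn as nn

class GGSNNSketch(nn.Module):
    """Gated graph updates: node states are the hidden states of one shared
    GRU cell, fed the sum of linearly transformed neighbour states."""

    def __init__(self, feat_dim: int, state_dim: int, num_steps: int):
        super().__init__()
        assert state_dim >= feat_dim, "features are zero-padded to state_dim"
        self.theta = nn.Linear(state_dim, state_dim, bias=False)  # Θ
        self.gru = nn.GRUCell(state_dim, state_dim)
        self.state_dim, self.num_steps = state_dim, num_steps

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # h^(0): node features zero-padded up to the GRU hidden dimension.
        pad = x.new_zeros(x.size(0), self.state_dim - x.size(1))
        h = torch.cat([x, pad], dim=1)
        for _ in range(self.num_steps):
            m = adj @ self.theta(h)   # m_u = sum over v in N(u) of Θ h_v
            h = self.gru(m, h)        # h_u^(l+1) = GRU(m_u^(l+1), h_u^(l))
        return h
</syntaxhighlight>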


Local pooling layers

Local pooling layers coarsen the graph via downsampling. Several learnable local pooling strategies have been proposed; we present two of them here. In each case, the input is the graph, represented by a matrix \mathbf{X} of node features and the graph adjacency matrix \mathbf{A}. The output is a new matrix \mathbf{X}' of node features and a new graph adjacency matrix \mathbf{A}'.


Top-k pooling

We first set

:\mathbf{y} = \frac{\mathbf{X}\mathbf{p}}{\Vert\mathbf{p}\Vert}

where \mathbf{p} is a learnable projection vector. The projection vector \mathbf{p} computes a scalar projection value for each graph node. The top-k pooling layer can then be formalised as follows:

:\mathbf{X}' = (\mathbf{X} \odot \text{sigmoid}(\mathbf{y}))_{\mathbf{i}}
:\mathbf{A}' = \mathbf{A}_{\mathbf{i},\mathbf{i}}

where \mathbf{i} = \text{top}_k(\mathbf{y}) is the subset of nodes with the top-k highest projection scores, \odot denotes element-wise matrix multiplication, and \text{sigmoid}(\cdot) is the sigmoid function. In other words, the nodes with the top-k highest projection scores are retained in the new adjacency matrix \mathbf{A}'. The \text{sigmoid}(\cdot) operation makes the projection vector \mathbf{p} trainable by backpropagation, which otherwise would produce discrete outputs.
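
A minimal sketch of this operation in plain PyTorch follows; the function name top_k_pooling and its interface are illustrative assumptions.

<syntaxhighlight lang="python">
import torch

def top_k_pooling(x, adj, p, k):
    """Score nodes by projection onto a learnable vector p, keep the k
    highest-scoring nodes, and gate their features with sigmoid(y)."""
    y = (x @ p) / p.norm()                               # scalar score per node
    idx = torch.topk(y, k).indices                       # i = top_k(y)
    x_new = (x * torch.sigmoid(y).unsqueeze(-1))[idx]    # X' = (X ⊙ sigmoid(y))_i
    adj_new = adj[idx][:, idx]                           # A' = A_{i,i}
    return x_new, adj_new, idx
</syntaxhighlight>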


Self-attention pooling

We first set

:\mathbf{y} = \text{GNN}(\mathbf{X}, \mathbf{A})

where \text{GNN} is a generic permutation equivariant GNN layer (e.g., GCN, GAT, MPNN). The self-attention pooling layer can then be formalised as follows:

:\mathbf{X}' = (\mathbf{X} \odot \mathbf{y})_{\mathbf{i}}
:\mathbf{A}' = \mathbf{A}_{\mathbf{i},\mathbf{i}}

where \mathbf{i} = \text{top}_k(\mathbf{y}) is the subset of nodes with the top-k highest projection scores, and \odot denotes element-wise matrix multiplication.

The self-attention pooling layer can be seen as an extension of the top-k pooling layer. Differently from top-k pooling, the self-attention scores computed in self-attention pooling account both for the graph features and the graph topology.
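
The difference from top-k pooling can be seen in a short sketch: the scores come from a permutation equivariant GNN layer (e.g., the GCN layer sketched earlier) rather than a plain projection. The function name self_attention_pooling and its interface are illustrative assumptions.

<syntaxhighlight lang="python">
import torch

def self_attention_pooling(x, adj, gnn, k):
    """Like top-k pooling, but scores y = GNN(X, A) depend on both node
    features and graph topology. `gnn` maps (N, d) features and an (N, N)
    adjacency matrix to one scalar score per node."""
    y = gnn(x, adj).squeeze(-1)        # (N,) self-attention scores
    idx = torch.topk(y, k).indices     # i = top_k(y)
    x_new = (x * y.unsqueeze(-1))[idx] # X' = (X ⊙ y)_i
    adj_new = adj[idx][:, idx]         # A' = A_{i,i}
    return x_new, adj_new, idx
</syntaxhighlight>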


Applications


Protein folding

Graph neural networks are one of the main building blocks of AlphaFold, an artificial intelligence program developed by Google's DeepMind for solving the protein folding problem in biology. AlphaFold achieved first place in several CASP competitions.


Social networks

Social networks are a major application domain for GNNs due to their natural representation as social graphs. GNNs are used to develop recommender systems based on both social relations and item relations.


Combinatorial optimization

GNNs are used as fundamental building blocks for several combinatorial optimization algorithms. Examples include computing shortest paths or Eulerian circuits for a given graph, deriving chip placements superior or competitive to handcrafted human solutions, and improving expert-designed branching rules in branch and bound.


Cyber security

When viewed as a graph, a network of computers can be analyzed with GNNs for anomaly detection. Anomalies within provenance graphs often correlate with malicious activity within the network. GNNs have been used to identify these anomalies on individual nodes and within paths to detect malicious processes, or on the edge level to detect lateral movement.


References

* Wu, Lingfei; Chen, Yu; Shen, Kai; Guo, Xiaojie; Gao, Hanning; Li, Shucheng; Pei, Jian; Long, Bo (2023). "Graph Neural Networks for Natural Language Processing: A Survey". ''Foundations and Trends in Machine Learning'' 16 (2): 119–328. doi:10.1561/2200000096. arXiv:2106.06090.
* Wu, Lingfei; Cui, Peng; Pei, Jian; Zhao, Liang (2022). ''Graph Neural Networks: Foundations, Frontiers, and Applications''. Springer Singapore. https://graph-neural-networks.github.io/
* Scarselli, Franco; Gori, Marco; Tsoi, Ah Chung; Hagenbuchner, Markus; Monfardini, Gabriele (2009). "The Graph Neural Network Model". ''IEEE Transactions on Neural Networks'' 20 (1): 61–80. doi:10.1109/TNN.2008.2005605. PMID 19068426.
* Micheli, Alessio (2009). "Neural Network for Graphs: A Contextual Constructive Approach". ''IEEE Transactions on Neural Networks'' 20 (3): 498–511. doi:10.1109/TNN.2008.2010350. PMID 19193509.
* Sanchez-Lengeling, Benjamin; Reif, Emily; Pearce, Adam; Wiltschko, Alex (2021). "A Gentle Introduction to Graph Neural Networks". ''Distill'' 6 (9): e33. doi:10.23915/distill.00033. https://distill.pub/2021/gnn-intro
* Daigavane, Ameya; Ravindran, Balaraman; Aggarwal, Gaurav (2021). "Understanding Convolutions on Graphs". ''Distill'' 6 (9): e32. doi:10.23915/distill.00032. https://distill.pub/2021/understanding-gnns
* Gilmer, Justin; Schoenholz, Samuel S.; Riley, Patrick F.; Vinyals, Oriol; Dahl, George E. (2017). "Neural Message Passing for Quantum Chemistry". ''Proceedings of Machine Learning Research'' 70: 1263–1272. arXiv:1704.01212.
* Kipf, Thomas N.; Welling, Max (2016). "Semi-Supervised Classification with Graph Convolutional Networks". arXiv:1609.02907.
* Hamilton, William; Ying, Rex; Leskovec, Jure (2017). "Inductive Representation Learning on Large Graphs". ''Neural Information Processing Systems'' 31. arXiv:1706.02216. https://cs.stanford.edu/people/jure/pubs/graphsage-nips17.pdf
* Veličković, Petar; Cucurull, Guillem; Casanova, Arantxa; Romero, Adriana; Liò, Pietro; Bengio, Yoshua (2018). "Graph Attention Networks". arXiv:1710.10903.
* "Stanford Large Network Dataset Collection". https://snap.stanford.edu/data/. Retrieved 2021-07-05.
* Li, Zhuwen; Chen, Qifeng; Koltun, Vladlen (2018). "Combinatorial optimization with graph convolutional networks and guided tree search". ''Neural Information Processing Systems'' 31: 537–546. arXiv:1810.10659.
* Bronstein, Michael M.; Bruna, Joan; Cohen, Taco; Veličković, Petar (2021). "Geometric Deep Learning: Grids, Groups, Graphs, Geodesics and Gauges". arXiv:2104.13478.
* Douglas, B. L. (2011). "The Weisfeiler–Lehman Method and Graph Isomorphism Testing". arXiv:1101.5211.
* Xu, Keyulu; Hu, Weihua; Leskovec, Jure; Jegelka, Stefanie (2019). "How Powerful are Graph Neural Networks?". arXiv:1810.00826.
* Veličković, Petar (2022). "Message passing all the way up". arXiv:2202.11097.
* Qasim, Shah Rukh; Kieseler, Jan; Iiyama, Yutaro; Pierini, Maurizio (2019). "Learning representations of irregular particle-detector geometry with distance-weighted graph networks". ''The European Physical Journal C'' 79 (7). doi:10.1140/epjc/s10052-019-7113-9. arXiv:1902.07987.
* Liu, Chuang; Zhan, Yibing; Li, Chang; Du, Bo; Wu, Jia; Hu, Wenbin; Liu, Tongliang; Tao, Dacheng (2022). "Graph Pooling for Graph Neural Networks: Progress, Challenges, and Opportunities". arXiv:2204.07321.
* Bodnar, Christian; Frasca, Fabrizio; Guang Wang, Yu; Otter, Nina; Montúfar, Guido; Liò, Pietro; Bronstein, Michael (2021). "Weisfeiler and Lehman Go Topological: Message Passing Simplicial Networks". arXiv:2103.03212.
* Gao, Hongyang; Ji, Shuiwang (2019). "Graph U-Nets". arXiv:1905.05178.
* Lee, Junhyun; Lee, Inyeop; Kang, Jaewoo (2019). "Self-Attention Graph Pooling". arXiv:1904.08082.
* Alon, Uri; Yahav, Eran (2021). "On the Bottleneck of Graph Neural Networks and its Practical Implications". arXiv:2006.05205.
* Chen, Deli; Lin, Yankai; Li, Wei; Li, Peng; Zhou, Jie; Sun, Xu (2020). "Measuring and Relieving the Over-smoothing Problem for Graph Neural Networks from the Topological View". ''Proceedings of the AAAI Conference on Artificial Intelligence'' 34 (4): 3438–3445. doi:10.1609/aaai.v34i04.5747. arXiv:1909.03211.
* Sample, Ian (2 December 2018). "Google's DeepMind predicts 3D shapes of proteins". ''The Guardian''. https://www.theguardian.com/science/2018/dec/02/google-deepminds-ai-program-alphafold-predicts-3d-shapes-of-proteins. Retrieved 30 November 2020.
* "DeepMind's protein-folding AI has solved a 50-year-old grand challenge of biology". ''MIT Technology Review''. https://www.technologyreview.com/2020/11/30/1012712/deepmind-protein-folding-ai-solved-biology-science-drugs-disease/. Retrieved 30 November 2020.
* Fan, Wenqi; Ma, Yao; Li, Qing; He, Yuan; Zhao, Eric; Tang, Jiliang; Yin, Dawei (2019). ''Graph Neural Networks for Social Recommendation''. pp. 417–426. doi:10.1145/3308558.3313488. arXiv:1902.07243.
* Ying, Rex; He, Ruining; Chen, Kaifeng; Eksombatchai, Pong; Hamilton, William L.; Leskovec, Jure (2018). ''Graph Convolutional Neural Networks for Web-Scale Recommender Systems''. pp. 974–983. doi:10.1145/3219819.3219890. arXiv:1806.01973.
* Gasse, Maxime; Chételat, Didier; Ferroni, Nicola; Charlin, Laurent; Lodi, Andrea (2019). "Exact Combinatorial Optimization with Graph Convolutional Neural Networks". arXiv:1906.01629.
* Cappart, Quentin; Chételat, Didier; Khalil, Elias; Lodi, Andrea; Morris, Christopher; Veličković, Petar (2021). "Combinatorial optimization and reasoning with graph neural networks". arXiv:2102.09544.
* Mirhoseini, Azalia; Goldie, Anna; Yazgan, Mustafa; et al. (2021). "A graph placement methodology for fast chip design". ''Nature'' 594 (7862): 207–212. doi:10.1038/s41586-021-03544-w. PMID 34108699.
* Fey, Matthias; Lenssen, Jan E. (2019). "Fast Graph Representation Learning with PyTorch Geometric". arXiv:1903.02428.
* "TensorFlow GNN". GitHub. https://github.com/tensorflow/gnn. Retrieved 30 June 2022.
* "jraph". GitHub. https://github.com/deepmind/jraph. Retrieved 30 June 2022.
* Xu, Keyulu; Zhang, Mozhi; Jegelka, Stephanie; Kawaguchi, Kenji (2021). "Optimization of Graph Neural Networks: Implicit Acceleration by Skip Connections and More Depth". arXiv:2105.04550.
* Li, Yujia; Tarlow, Daniel; Brockschmidt, Mark; Zemel, Richard (2016). "Gated Graph Sequence Neural Networks". arXiv:1511.05493.
* Grady, Leo; Polimeni, Jonathan (2011). ''Discrete Calculus: Applied Analysis on Graphs for Computational Science''. Springer. http://leogrady.net/wp-content/uploads/2017/01/grady2010discrete.pdf
* Xu, Keyulu; Li, Chengtao; Tian, Yonglong; Sonobe, Tomohiro; Kawarabayashi, Ken-ichi; Jegelka, Stefanie (2018). "Representation Learning on Graphs with Jumping Knowledge Networks". arXiv:1806.03536.
* Lucibello, Carlo (2021). "GraphNeuralNetworks.jl". https://github.com/CarloLucibello/GraphNeuralNetworks.jl. Retrieved 2023-09-21.


External links

* https://distill.pub/2021/gnn-intro/ – A Gentle Introduction to Graph Neural Networks (Distill)