{
"paper_id": "2022",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T16:25:04.039151Z"
},
"title": "Continuous Temporal Graph Networks for Event-Based Graph Data",
"authors": [
{
"first": "Jin",
"middle": [],
"last": "Guo",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Xi'an Jiaotong University",
"location": {}
},
"email": ""
},
{
"first": "Zhen",
"middle": [],
"last": "Han",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Siemens AG",
"location": {}
},
"email": ""
},
{
"first": "Zhou",
"middle": [],
"last": "Su",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Xi'an Jiaotong University",
"location": {}
},
"email": ""
},
{
"first": "Jiliang",
"middle": [],
"last": "Li",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Xi'an Jiaotong University",
"location": {}
},
"email": ""
},
{
"first": "Volker",
"middle": [],
"last": "Tresp",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Siemens AG",
"location": {}
},
"email": ""
},
{
"first": "Yuyi",
"middle": [],
"last": "Wang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Xi'an Jiaotong University",
"location": {}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "There has been an increasing interest in modeling continuous-time dynamics of temporal graph data. Previous methods encode timeevolving relational information into a lowdimensional representation by specifying discrete layers of neural networks, while realworld dynamic graphs often vary continuously over time. Hence, we propose Continuous Temporal Graph Networks (CTGNs) to capture continuous dynamics of temporal graph data. We use both the link starting timestamps and link duration as evolving information to model continuous dynamics of nodes. The key idea is to use neural ordinary differential equations (ODE) to characterize the continuous dynamics of node representations over dynamic graphs. We parameterize ordinary differential equations using a novel graph neural network. The existing dynamic graph networks can be considered as a specific discretization of CTGNs. Experiment results on both transductive and inductive tasks demonstrate the effectiveness of our proposed approach over competitive baselines.",
"pdf_parse": {
"paper_id": "2022",
"_pdf_hash": "",
"abstract": [
{
"text": "There has been an increasing interest in modeling continuous-time dynamics of temporal graph data. Previous methods encode timeevolving relational information into a lowdimensional representation by specifying discrete layers of neural networks, while realworld dynamic graphs often vary continuously over time. Hence, we propose Continuous Temporal Graph Networks (CTGNs) to capture continuous dynamics of temporal graph data. We use both the link starting timestamps and link duration as evolving information to model continuous dynamics of nodes. The key idea is to use neural ordinary differential equations (ODE) to characterize the continuous dynamics of node representations over dynamic graphs. We parameterize ordinary differential equations using a novel graph neural network. The existing dynamic graph networks can be considered as a specific discretization of CTGNs. Experiment results on both transductive and inductive tasks demonstrate the effectiveness of our proposed approach over competitive baselines.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Graph neural networks (GNNs) have attracted growing interest in the past few years due to their universal applicability in various fields, e.g., social networks (Fan et al., 2019) and natural language processing (Liu et al., 2021a) . Graph neural networks (GNNs) learn a lower-dimensional representation for a node in a vector space by aggregating the information from its neighbors using discrete hidden layers. Then the embedding can be used for downstream tasks such as node classification (Atwood and Towsley, 2015) , link prediction (Zhang and Chen, 2018; Li et al., 2020) , and knowledge completion (Liu et al., 2021b) .",
"cite_spans": [
{
"start": 161,
"end": 179,
"text": "(Fan et al., 2019)",
"ref_id": "BIBREF4"
},
{
"start": 212,
"end": 231,
"text": "(Liu et al., 2021a)",
"ref_id": "BIBREF11"
},
{
"start": 493,
"end": 519,
"text": "(Atwood and Towsley, 2015)",
"ref_id": "BIBREF0"
},
{
"start": 538,
"end": 560,
"text": "(Zhang and Chen, 2018;",
"ref_id": "BIBREF27"
},
{
"start": 561,
"end": 577,
"text": "Li et al., 2020)",
"ref_id": "BIBREF10"
},
{
"start": 605,
"end": 624,
"text": "(Liu et al., 2021b)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Most graph neural networks only accept static graphs as input, although real-life graphs of interactions, such as user-item interactions, often change Figure 1 : The importance of link duration. Consider the behavior of a user watching movies. There are two types of nodes in the graph: user nodes and item nodes. Given the user's historical behavior, the predicted target is (user 1 , don't_click, Movie_4). If we ignore the link duration information, user 1 seems interested in cartoon movies because he clicked on it at timestamp t 1 . But user 1 only watched the Movie_ 1 for 10s. The link duration indicated that although the user clicked, he was not interested. over time. Learning the node representation on dynamic graphs is a very challenging task. Dynamic graph methods can be divided into discrete-time dynamic graph (DTDG) models and continuous-time dynamic graph (CTDG) models. More recently, an increasing interest in CTDG-based graph representation learning algorithms can be observed (Xu et al., 2020; Trivedi et al., 2018; Kumar et al., 2019; Rossi et al., 2020; Wang et al., 2020b; Ding et al., 2021) .",
"cite_spans": [
{
"start": 1000,
"end": 1017,
"text": "(Xu et al., 2020;",
"ref_id": "BIBREF25"
},
{
"start": 1018,
"end": 1039,
"text": "Trivedi et al., 2018;",
"ref_id": "BIBREF18"
},
{
"start": 1040,
"end": 1059,
"text": "Kumar et al., 2019;",
"ref_id": "BIBREF9"
},
{
"start": 1060,
"end": 1079,
"text": "Rossi et al., 2020;",
"ref_id": "BIBREF15"
},
{
"start": 1080,
"end": 1099,
"text": "Wang et al., 2020b;",
"ref_id": "BIBREF21"
},
{
"start": 1100,
"end": 1118,
"text": "Ding et al., 2021)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [
{
"start": 151,
"end": 159,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Although the above continuous-time dynamic methods have achieved impressive results, they still have limitations. The majority of research (Rossi et al., 2020; Wang et al., 2020b; Xu et al., 2020; Trivedi et al., 2018; Kumar et al., 2019) pays attention to the contact sequence dynamic graphs, in which the links are permanent, and no link duration is provided (e.g., email networks and citation networks). However, most real-life networks are event-based dynamic graphs in which the interactions between source nodes and destination nodes are not permanent (e.g., employment networks and ",
"cite_spans": [
{
"start": 139,
"end": 159,
"text": "(Rossi et al., 2020;",
"ref_id": "BIBREF15"
},
{
"start": 160,
"end": 179,
"text": "Wang et al., 2020b;",
"ref_id": "BIBREF21"
},
{
"start": 180,
"end": 196,
"text": "Xu et al., 2020;",
"ref_id": "BIBREF25"
},
{
"start": 197,
"end": 218,
"text": "Trivedi et al., 2018;",
"ref_id": "BIBREF18"
},
{
"start": 219,
"end": 238,
"text": "Kumar et al., 2019)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Figure 2: Overview of our Continuous Temporal Graph network. proximity networks). The event-based dynamic graph includes the time at which the link appeared and the duration of the link. Link duration reflects the degree of association between the two nodes, e.g., user i browses item j for 2 seconds and k for 20 seconds. It means that the user's interest in the two items j, k is different. Ignoring the link duration information can reduce the link prediction ability and even result in questionable inference. Thus, it is crucial to consider the influence of link duration on node relationship prediction (Zhang and Chen, 2018; Li et al., 2020) and knowledge completion (Liu et al., 2021b) .",
"cite_spans": [
{
"start": 609,
"end": 631,
"text": "(Zhang and Chen, 2018;",
"ref_id": "BIBREF27"
},
{
"start": 632,
"end": 648,
"text": "Li et al., 2020)",
"ref_id": "BIBREF10"
},
{
"start": 674,
"end": 693,
"text": "(Liu et al., 2021b)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Memory",
"sec_num": null
},
{
"text": "The existing GNN-based methods (Weinan, 2017; Oono and Suzuki, 2019) that learn the node representation over dynamic graphs can be considered discrete dynamical systems. demonstrate that the continuous dynamical systems are more efficient for modeling continuous-time dynamic data. The discrete networks can roughly be regarded as continuous networks by stacking enough layers. However, Onno and Suzuki (2019) point out that graph neural networks (GNNs) exponentially lose expressive power for downstream tasks, which will lead to over-smoothing problems as we add more hidden layers. Therefore, designing effective continuous Graph Neural Networks to model continuous-time dynamics of node representation on dynamic graphs is critical. To this end, many continuous graph neural networks Xhonneux et al., 2019) have been proposed recently. Although those mentioned above continuous dynamic neural networks are more efficient to model the graph data, few approaches have been proposed for dealing with dynamic graphs using continuous-time dynamic neural networks.",
"cite_spans": [
{
"start": 31,
"end": 45,
"text": "(Weinan, 2017;",
"ref_id": "BIBREF22"
},
{
"start": 46,
"end": 68,
"text": "Oono and Suzuki, 2019)",
"ref_id": "BIBREF13"
},
{
"start": 788,
"end": 810,
"text": "Xhonneux et al., 2019)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Memory",
"sec_num": null
},
{
"text": "This paper proposes a general framework of continuous temporal graph networks (CTGNs) to model continuous-time representations for dy-namic graph-structured data. We combine Ordinary Differential Equation Systems (ODEs) and graphs methods. Instead of specifying discrete hidden layers, we integrate neural layers over continuous time. Figure 2 illustrates the workflow of the proposed CTGN method. There is an interaction between two nodes. First, a novel temporal graph network (TGN) is applied as the encoder to learn the latent states using the updated memory. Then, the neural ODE module is used to model the node's continuous-time representation. Considering that the link duration reflects the degree of association between the two nodes, we use the link duration as the integration variable to control the weights of different interactions. After that, we use the LSTM (Shi et al., 2015) as the decoder to compute the probability of interaction between the two given nodes. Finally, the memory is updated as the input of the encoder. Memory is a compressed representation of the historical behavior of all nodes defined in Section 3.1. Experimental results on five real-world datasets of link prediction demonstrate the effectiveness of the proposed method over the state-of-art baselines. The main contributions of this paper are:",
"cite_spans": [
{
"start": 876,
"end": 894,
"text": "(Shi et al., 2015)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [
{
"start": 335,
"end": 343,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Memory",
"sec_num": null
},
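{
"text": "To make the workflow above concrete, the following is a minimal PyTorch-style sketch of one CTGN step for a single event (i, j, t, dt): update the message and memory, encode a discrete latent state with temporal attention, integrate it over the link duration with a neural ODE, and decode an interaction probability. All module names (msg_fn, mem_fn, encoder, ode_func, decoder) and the batching scheme are illustrative assumptions, not the authors' released code.\n\nimport torch\nfrom torchdiffeq import odeint_adjoint as odeint\n\ndef ctgn_step(event, memory, msg_fn, mem_fn, encoder, ode_func, decoder):\n    src, dst, t, dur, edge_feat = event\n    # 1) message and memory update for the source node (Section 3.1)\n    raw = torch.cat([memory[src], memory[dst], dur.view(1), edge_feat], dim=-1)\n    m_src = msg_fn(raw)\n    memory[src] = mem_fn(m_src, memory[src])\n    # 2) discrete latent state from temporal graph attention\n    h = encoder(src, t, memory)\n    # 3) continuous dynamics: integrate dz/dt = f(z, t) over the link duration (Section 3.2)\n    span = torch.stack([torch.zeros_like(dur), dur])\n    z = odeint(ode_func, h, span)[-1]\n    # 4) decode the interaction probability for (src, dst)\n    return decoder(z)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Memory",
"sec_num": null
},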
{
"text": "\u2022 We present a novel Continuous Temporal Graph Network (CTGN) inspired by the neural ODE method.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Memory",
"sec_num": null
},
{
"text": "\u2022 CTGNs pay attention to the event-based dynamic graph. CTGNs update the node's representation with both the valid discrete timestamps when the link appears and the link duration between two linked nodes as evolving information.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Memory",
"sec_num": null
},
{
"text": "\u2022 We show that our model can outperform existing state-of-the-art methods on both transductive and inductive tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Memory",
"sec_num": null
},
{
"text": "The existing dynamic graph representation learning methods can be divided into two categories, discrete-time dynamic graphs and continuous-time dynamic graphs. Discrete-time dynamic graphs (DTDGs) are a sequence of snapshots at different time intervals.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background 2.1 Dynamic Graph Methods",
"sec_num": "2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "DG = {G 1 , G 2 , ..., G T } ,",
"eq_num": "(1)"
}
],
"section": "Background 2.1 Dynamic Graph Methods",
"sec_num": "2"
},
{
"text": "where T is the number of snapshots. Current dynamic graph methods (Wang et al., 2020a; Trivedi et al., 2017; Xiong et al., 2019) have been mostly designed for discrete-time dynamic graphs (DT-DGs).",
"cite_spans": [
{
"start": 66,
"end": 86,
"text": "(Wang et al., 2020a;",
"ref_id": "BIBREF20"
},
{
"start": 87,
"end": 108,
"text": "Trivedi et al., 2017;",
"ref_id": "BIBREF17"
},
{
"start": 109,
"end": 128,
"text": "Xiong et al., 2019)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background 2.1 Dynamic Graph Methods",
"sec_num": "2"
},
{
"text": "Continuous-time dynamic graphs (CTDGs) can be viewed as a set of observations/events (Kazemi et al., 2019) , and the network evolution information is retained. There are only a few works on CTDG. But recently, more attention has been paid to continuous-time graphs. All three representations of CTDG are described in more detail below.",
"cite_spans": [
{
"start": 85,
"end": 106,
"text": "(Kazemi et al., 2019)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background 2.1 Dynamic Graph Methods",
"sec_num": "2"
},
{
"text": "1. The contact sequence dynamic graph is the simplest representation form of CTDG.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background 2.1 Dynamic Graph Methods",
"sec_num": "2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "CS = (u i , v i , t i ) ,",
"eq_num": "(2)"
}
],
"section": "Background 2.1 Dynamic Graph Methods",
"sec_num": "2"
},
{
"text": "where u is the source node, v is the destination node, and t is the timestamp when the link appears. In the contact sequence dynamic graph, the link is permanent (e.g., citation networks) or instantaneous (e.g., email networks). Therefore, this graph has no link duration.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background 2.1 Dynamic Graph Methods",
"sec_num": "2"
},
{
"text": "There has been a lot of research on contact sequence dynamic graphs. Trivedi et al. (2018) learn the representation of node i by aggregating the node destination's neighborhood information and updating the embedding for the node using a recurrent architecture after an interaction involving node i. Kumar et al. (2019) employ two recurrent neural networks to update the embedding of a user and an item at every interaction. TGAT (Xu et al., 2020) proposes a novel functional time encoding method and uses self-attention to inductive representation learning on temporal graphs. Wang et al. (2020b) propose the asynchronous propagation attention network (APAN) for real-time temporal graph embedding.",
"cite_spans": [
{
"start": 69,
"end": 90,
"text": "Trivedi et al. (2018)",
"ref_id": "BIBREF18"
},
{
"start": 299,
"end": 318,
"text": "Kumar et al. (2019)",
"ref_id": "BIBREF9"
},
{
"start": 429,
"end": 446,
"text": "(Xu et al., 2020)",
"ref_id": "BIBREF25"
},
{
"start": 577,
"end": 596,
"text": "Wang et al. (2020b)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background 2.1 Dynamic Graph Methods",
"sec_num": "2"
},
{
"text": "2. The event-based dynamic graph consists of the node pairs (u, v) , the edge appears timestamp t and the link duration \u2206t . Link duration indicates how long the edge lasts until it disappears.",
"cite_spans": [
{
"start": 60,
"end": 66,
"text": "(u, v)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background 2.1 Dynamic Graph Methods",
"sec_num": "2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "EB = (u i , v i , t i , \u2206t i ) .",
"eq_num": "(3)"
}
],
"section": "Background 2.1 Dynamic Graph Methods",
"sec_num": "2"
},
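{
"text": "As a concrete illustration of the event-based record EB = (u, v, t, dt), the following minimal Python sketch shows how such events could be represented; the Event type and the example values are hypothetical, not part of any released dataset loader.\n\nfrom typing import NamedTuple\n\nclass Event(NamedTuple):\n    src: int          # source node u\n    dst: int          # destination node v\n    t: float          # timestamp at which the link appears\n    duration: float   # link duration dt; 0.0 for contact-sequence graphs, where it is unavailable\n\n# e.g., user 7 starts watching movie 42 at t = 3600 s and stops after 10 s\nevents = [Event(7, 42, 3600.0, 10.0), Event(7, 43, 3700.0, 1200.0)]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background 2.1 Dynamic Graph Methods",
"sec_num": "2"
},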
{
"text": "Rossi et al. 2020proposes a generic inductive framework operating on contact sequence dynamic graphs by adding a memory module on TGAT (Xu et al., 2020) . TGN can also operate on the event-based dynamic graph by simply replacing the timestamp t with link duration \u2206t in the memory module.",
"cite_spans": [
{
"start": 135,
"end": 152,
"text": "(Xu et al., 2020)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background 2.1 Dynamic Graph Methods",
"sec_num": "2"
},
{
"text": "3. The streams graph can be viewed as a particular case of the event-based dynamic graph. The streams graph includes the edge label \u03b4, which indicates edge removal or edge addition.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background 2.1 Dynamic Graph Methods",
"sec_num": "2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "GS = (u i , v i , t i , \u03b4 i ), \u03b4 i \u2208 [\u22121, 1] .",
"eq_num": "(4)"
}
],
"section": "Background 2.1 Dynamic Graph Methods",
"sec_num": "2"
},
{
"text": "TGN (Rossi et al., 2020) converts the streams graph into an event-based graph for processing. According to the edge label, the event can be reorganized as (u i , v i , t \u2032 , t), which was created at time t \u2032 and deleted at time t, then two messages can be computed for the source and target nodes.",
"cite_spans": [
{
"start": 4,
"end": 24,
"text": "(Rossi et al., 2020)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background 2.1 Dynamic Graph Methods",
"sec_num": "2"
},
{
"text": "The existing CTDG methods model discrete dynamics representations of continuous-time graph data with multiple discrete propagation layers. Our proposed method focuses on the event-based temporal graph and updates the node's representation with both the timestamps and the link duration between the two nodes. CTGN also supports contact sequence dynamic graph. The model details will be slightly different from event-based dynamic graph. We will clarify this point in Chapter 3.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background 2.1 Dynamic Graph Methods",
"sec_num": "2"
},
{
"text": "Continuous-time dynamical systems mean that the system's behavior changes with time development in the continuous-time domain. There have been related works that view data as a continuous object in artificial intelligence, e.g., pictures (Chen et al., 2018) and static graphs (Xhonneux et al., 2019; Poli et al., 2019) . The continuous-time dynamic graph (CTDG) we introduced in Section 2.1 is also a continuous-time dynamical system in which nodes' state changes over time. Therefore, it is necessary to model the continuous dynamical system of CTDG data. To the best of our knowledge, our CTGN is the first approach that learn continuous-time dynamics on CTDG.",
"cite_spans": [
{
"start": 276,
"end": 299,
"text": "(Xhonneux et al., 2019;",
"ref_id": "BIBREF23"
},
{
"start": 300,
"end": 318,
"text": "Poli et al., 2019)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Continuous-time Dynamical Systems",
"sec_num": "2.2"
},
{
"text": "Considering a residual network:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Ordinary Differential Equations and Continuous Graph Neural Networks",
"sec_num": "2.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "h t+1 = h t + f (h t , \u03b8 t )",
"eq_num": "(5)"
}
],
"section": "Neural Ordinary Differential Equations and Continuous Graph Neural Networks",
"sec_num": "2.3"
},
{
"text": "A theoretical method to improve the performance of discrete networks is to stack more neural layers and take smaller steps (Chen et al., 2018). However, this scheme is not feasible because of the limited computer resources and over-fitting problems. Oono and Suzuki (2019) point out that Graph Neural Networks (GNNs) exponentially lose expressive power for downstream tasks when adding more hidden layers because of over-smoothness problems.",
"cite_spans": [
{
"start": 250,
"end": 272,
"text": "Oono and Suzuki (2019)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Ordinary Differential Equations and Continuous Graph Neural Networks",
"sec_num": "2.3"
},
{
"text": "Inspired by residual network and ordinary difference, neural ordinary difference is proposed to solve this problem. Neural ODE models continuous-time dynamical systems by parameterizing the hidden state's derivative using a neural network.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Ordinary Differential Equations and Continuous Graph Neural Networks",
"sec_num": "2.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "d z d t = f (z, t), z(0) = x,",
"eq_num": "(6)"
}
],
"section": "Neural Ordinary Differential Equations and Continuous Graph Neural Networks",
"sec_num": "2.3"
},
{
"text": "NeuralODE can be regarded as a discrete network with an infinitesimal learning rate and infinite layers. Weinanl (2017) proposes the idea of using continuous dynamical systems to model hidden layers. Chen et al. (2018) introduce neural ODE, a continuous-depth model by parameterization the derivative of the hidden state using a neural network. Neural ODE only focuses on unstructured data. Xhonneux et al. (2019) apply continuous dynamical methods to static graph-structured data. They propose Continuous Graph Neural Networks (CGNNs), which solve the over-smoothing caused by stacking more layers and improve the performance of GNNs. Zang and Wang (2019) learn continuous-time dynamics on complex networks. However, continuous graph neural networks (CGNN) can only deal with static data.",
"cite_spans": [
{
"start": 636,
"end": 656,
"text": "Zang and Wang (2019)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Ordinary Differential Equations and Continuous Graph Neural Networks",
"sec_num": "2.3"
},
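{
"text": "The connection between Equation 5 and Equation 6 can be made explicit with a short sketch: repeatedly applying a (shared) residual block is exactly explicit Euler integration of dz/dt = f(z), and shrinking the step approaches the continuous model. The block below is a minimal illustration under that reading, with an arbitrary MLP standing in for f; it is not any specific published architecture.\n\nimport torch\nimport torch.nn as nn\n\nf = nn.Sequential(nn.Linear(8, 8), nn.Tanh(), nn.Linear(8, 8))   # a stand-in for f(h, theta)\nh = torch.randn(4, 8)                                            # initial hidden state h_0\n\nsteps = 100\ndt = 1.0 / steps\nfor _ in range(steps):\n    # Equation 5 with step size dt; as dt -> 0 this converges to the ODE in Equation 6\n    h = h + dt * f(h)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Ordinary Differential Equations and Continuous Graph Neural Networks",
"sec_num": "2.3"
},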
{
"text": "In this section, we introduce our proposed approach. The key idea of the CTGN is to build continuous-time hidden layers which can learn continuous informative node representations over event-based dynamic graphs. To characterize the continuous dynamics of node representation, we use ordinary differential equations (ODEs) parameterized by a neural network, which is a continuous function of time. We study both transductive and inductive settings. In the transductive task, we predict future links of the nodes observed during the training phase. In the inductive tasks, we predict future links of the nodes never seen before. We first employ a temporal graph attention layer (Xu et al., 2020) to project each node into a latent space based on its features and neighbors. And then, an ODE module is designed to define the continuous dynamics on the node's latent representation h i (t).",
"cite_spans": [
{
"start": 677,
"end": 694,
"text": "(Xu et al., 2020)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The Proposed Method: CTGN",
"sec_num": "3"
},
{
"text": "Memory Passing. Memory s i (t) is used to record the historical information of each node i the model has seen so far. It is a compressed representation of the historical behavior of all nodes. Memory s i (t) is updated when there is an interaction involving node i. At the end of each batch, we firstly compute memory s i (t) using the last time message m i (t \u2212 ) and memory s i (t \u2212 ):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Temporal Graph Network",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "s i (t) = mem(m i (t), s i (t \u2212 )) .",
"eq_num": "(7)"
}
],
"section": "Temporal Graph Network",
"sec_num": "3.1"
},
{
"text": "Here, mem(\u2022) is a learnable memory update function. In all experiments, we choose the memory function as GRU. s i (0) is initialized as a zero vector. At the end of each batch, the message m i (t) for the node can be updated to compute i's memory:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Temporal Graph Network",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "m i (t) = msg s (s i (t \u2212 )||s j (t \u2212 )||\u2206t||e ij (t)) , m j (t) = msg s (s j (t \u2212 )||s i (t \u2212 )||\u2206t||e ij (t)) .",
"eq_num": "(8)"
}
],
"section": "Temporal Graph Network",
"sec_num": "3.1"
},
{
"text": "Here || is the concatenation operator, \u2206t is the link duration between node i and j, . In the contact sequence dynamic graph, the link duration property is not available. We use (t \u2212 t \u2212 ) as \u2206t. There may be multiple events e i1 (t 1 ), . . . , e iN (t N ) involving the same node i in the same batch. In the experiment, we only use the latest interaction e iN (t N ) to compute i's message. msg(\u2022) is a learnable function, and we use an RNN network in our experiment: Multi-head Attention. Given an observed event p = (i, j, t, \u2206t), we can compute the node latent representation respectively for i and j using: ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Temporal Graph Network",
"sec_num": "3.1"
},
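{
"text": "The following is a minimal PyTorch sketch of the message and memory update in Equations 7 and 8, with an RNN cell as the learnable message function msg_s(\u2022) and a GRU cell as mem(\u2022), as described above; the exact dimensions and wiring are illustrative assumptions rather than the authors' implementation.\n\nimport torch\nimport torch.nn as nn\n\nclass MemoryUpdater(nn.Module):\n    def __init__(self, mem_dim, edge_dim):\n        super().__init__()\n        raw_dim = 2 * mem_dim + 1 + edge_dim      # s_i || s_j || dt || e_ij\n        self.msg = nn.RNNCell(raw_dim, mem_dim)   # msg_s(.) in Eq. 8\n        self.mem = nn.GRUCell(mem_dim, mem_dim)   # mem(.) in Eq. 7\n\n    def forward(self, s_i, s_j, dt, e_ij):\n        raw = torch.cat([s_i, s_j, dt.unsqueeze(-1), e_ij], dim=-1)\n        m_i = self.msg(raw, s_i)                  # message m_i(t)\n        return self.mem(m_i, s_i)                 # updated memory s_i(t)\n\nupdater = MemoryUpdater(mem_dim=172, edge_dim=172)\ns_i, s_j = torch.zeros(1, 172), torch.zeros(1, 172)\ns_i_new = updater(s_i, s_j, dt=torch.tensor([10.0]), e_ij=torch.randn(1, 172))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Temporal Graph Network",
"sec_num": "3.1"
},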
{
"text": "H (l) (t) = Attn (l) (Q (l) (t), K (l) (t), V (l) (t)) , (9)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Temporal Graph Network",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "Attn(Q, K, V) = sof tmax( QK T \u221a d k )V ,",
"eq_num": "(10)"
}
],
"section": "Temporal Graph Network",
"sec_num": "3.1"
},
{
"text": "where Q , K , V denote the 'querys ', 'keys', 'values', respectively .",
"cite_spans": [
{
"start": 35,
"end": 68,
"text": "', 'keys', 'values', respectively",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Temporal Graph Network",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "H (l) = [h (l) 1 , ..., h",
"eq_num": "(l)"
}
],
"section": "Temporal Graph Network",
"sec_num": "3.1"
},
{
"text": "i ] are the embedding of the graph nodes of l-th layers. The multi-head attention layer compute the node i's representation by aggregating it's N-hop neighbors.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Temporal Graph Network",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "Q (l) (t) = (H (l\u22121) (t) || \u03d5(0))W Q ,",
"eq_num": "(11)"
}
],
"section": "Temporal Graph Network",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "K (l) (t) = C (l) (t)W K ,",
"eq_num": "(12)"
}
],
"section": "Temporal Graph Network",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "V (l) (t) = C (l) (t)W V ,",
"eq_num": "(13)"
}
],
"section": "Temporal Graph Network",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "C (l) (t) = [H (l\u22121) 1 (t) || E 1 (t 1 ) || \u03d5(t \u2212 t 1 ), . . . , H (l\u22121) N (t) || E N (t N ) || \u03d5(t \u2212 t N )] .",
"eq_num": "(14)"
}
],
"section": "Temporal Graph Network",
"sec_num": "3.1"
},
{
"text": "Here \u03d5(\u2022) represents a generic time encoder (Xu et al., 2020) . W Q , W K , W V \u2208 R d k \u00d7d k are the projection matrices used to generate attention embedding. We define keys and values as the neighbor information. h ",
"cite_spans": [
{
"start": 44,
"end": 61,
"text": "(Xu et al., 2020)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Temporal Graph Network",
"sec_num": "3.1"
},
{
"text": "i (t) = s i (t) + v i , s i (t)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Temporal Graph Network",
"sec_num": "3.1"
},
{
"text": "is node i's memory which saves the history information for the node. E n (t) = [e 1n (t), ..., e in (t)], e in (t) is edge features between node i and it's n-hop neighbor at time t. Temporal graph network is a discrete method that can be thought of as a discretization of the continuous dynamical systems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Temporal Graph Network",
"sec_num": "3.1"
},
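{
"text": "Below is a minimal PyTorch sketch of the temporal attention of Equations 9-14 for a single target node with N sampled temporal neighbors; phi(\u2022) is assumed to be a module mapping a batch of time values to d-dimensional vectors, and all shapes and names are illustrative assumptions rather than the authors' code.\n\nimport torch\nimport torch.nn as nn\n\nclass TemporalAttention(nn.Module):\n    def __init__(self, dim, time_enc):\n        super().__init__()\n        self.phi = time_enc                             # generic time encoder phi(.)\n        self.W_q = nn.Linear(2 * dim, dim, bias=False)  # Eq. 11\n        self.W_k = nn.Linear(3 * dim, dim, bias=False)  # Eq. 12\n        self.W_v = nn.Linear(3 * dim, dim, bias=False)  # Eq. 13\n\n    def forward(self, h_i, h_nbr, e_nbr, dt_nbr):\n        # h_i: (d,); h_nbr, e_nbr: (N, d); dt_nbr: (N,) elapsed times t - t_n\n        q = self.W_q(torch.cat([h_i, self.phi(torch.zeros(1)).squeeze(0)], dim=-1))\n        c = torch.cat([h_nbr, e_nbr, self.phi(dt_nbr)], dim=-1)          # Eq. 14\n        k, v = self.W_k(c), self.W_v(c)\n        attn = torch.softmax(q @ k.t() / k.shape[-1] ** 0.5, dim=-1)     # Eq. 10\n        return attn @ v                                                  # Eq. 9",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Temporal Graph Network",
"sec_num": "3.1"
},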
{
"text": "In order to characterize the continuous dynamics of node representations, instead of only specifying a discrete sequence of hidden layers, we parameterize the hidden layers using ordinary differential equations (ODEs), a continuous function of time.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Continuous Dynamics of Node Representation",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "d z d t = f (z, t), z(0) = x.",
"eq_num": "(15)"
}
],
"section": "Model Continuous Dynamics of Node Representation",
"sec_num": "3.2"
},
{
"text": "Here, x is an initial vector, f is a learnable function, t is a time interval and z is a vector.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Continuous Dynamics of Node Representation",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "z(t) = z(0) + t 0 (f (t, z))d\u03c4.",
"eq_num": "(16)"
}
],
"section": "Model Continuous Dynamics of Node Representation",
"sec_num": "3.2"
},
{
"text": "We can compute the node's continuous-time dynamics representation by Equation 16 at arbitrary time t > 0. Previous work (Zang and Wang, 2019; Poli et al., 2019) model continuous-time dynamics for data by setting integration variable [0, t] as a hyperparameter. Considering the influence of link duration on the interaction between two nodes, we choose the link duration as the integration variable, in our experiment t = dur.",
"cite_spans": [
{
"start": 120,
"end": 141,
"text": "(Zang and Wang, 2019;",
"ref_id": "BIBREF26"
},
{
"start": 142,
"end": 160,
"text": "Poli et al., 2019)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model Continuous Dynamics of Node Representation",
"sec_num": "3.2"
},
{
"text": "Link duration shows how long it was (in seconds) until that user terminated browsing. Link duration can reflect the user's interest in different items. Take link duration as an integer variable that can control the weights of different interactions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Continuous Dynamics of Node Representation",
"sec_num": "3.2"
},
{
"text": "We parameterize the derivative of the hidden state using a neural network that takes the latent state, computed by the temporal graph network mentioned in Section 3.1 as input.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Continuous Dynamics of Node Representation",
"sec_num": "3.2"
},
{
"text": "z i (t) = ODESolver(f (t, z), h i (t), \u2206t i ). (17)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Continuous Dynamics of Node Representation",
"sec_num": "3.2"
},
{
"text": "Here, h i (t) is a discrete latent state computed by temporal graph networks, \u2206t i is the link duration between source node i and destination j. f (t, z) is ODE function, we choose f (t, z) as MLP. A black-box ODE solver computes the final node continuous dynamics embedding z i (t). We utilize the torchdiffeq.odeint_adjoint PyTorch package to solve reverse-time ODE and backpropagate.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Continuous Dynamics of Node Representation",
"sec_num": "3.2"
},
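{
"text": "A minimal sketch of Equation 17 with torchdiffeq is given below, assuming an MLP ODE function and a single node's latent state; the dimensions and the concrete module are illustrative, and only the odeint_adjoint call mirrors the usage described above.\n\nimport torch\nimport torch.nn as nn\nfrom torchdiffeq import odeint_adjoint as odeint\n\nclass ODEFunc(nn.Module):\n    # f(t, z): an MLP on the latent state\n    def __init__(self, dim):\n        super().__init__()\n        self.net = nn.Sequential(nn.Linear(dim, dim), nn.Tanh(), nn.Linear(dim, dim))\n    def forward(self, t, z):\n        return self.net(z)\n\nfunc = ODEFunc(172)\nh_i = torch.randn(1, 172)              # discrete latent state h_i(t) from the TGN encoder\ndt_i = 10.0                            # link duration in seconds\nspan = torch.tensor([0.0, dt_i])       # integrate from 0 to dt_i\nz_i = odeint(func, h_i, span)[-1]      # Eq. 17; odeint_adjoint backpropagates via the adjoint method",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Continuous Dynamics of Node Representation",
"sec_num": "3.2"
},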
{
"text": "The time-encoding method (Xu et al., 2020) used in this paper is an effective method to map timestamp t from the time domain to d-dim vector space.",
"cite_spans": [
{
"start": 25,
"end": 42,
"text": "(Xu et al., 2020)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Time Smoothness",
"sec_num": "3.3"
},
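{
"text": "A common way to implement such a functional time encoding is phi(t) = cos(t * w + b) with learnable frequencies w and phases b; the sketch below follows that pattern, which is an assumption about a typical implementation rather than the exact form used by the authors.\n\nimport torch\nimport torch.nn as nn\n\nclass TimeEncoder(nn.Module):\n    def __init__(self, dim):\n        super().__init__()\n        self.w = nn.Parameter(torch.randn(dim))   # learnable frequencies\n        self.b = nn.Parameter(torch.zeros(dim))   # learnable phases\n    def forward(self, t):\n        # t: (...,) timestamps or time deltas -> (..., dim) embedding\n        return torch.cos(t.unsqueeze(-1) * self.w + self.b)\n\nphi = TimeEncoder(172)\nemb = phi(torch.tensor([0.0, 5.0, 60.0]))      # three time values mapped to 172-dim vectors",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Time Smoothness",
"sec_num": "3.3"
},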
{
"text": "However, the learning process of each timestamp is independent of other timestamps. Independent learning of hyperplanes of adjacent time intervals may cause adjacent times to be farther apart in embedded space. Actually, adjacent states in the graph should be more similar. To avoid the problem mentioned above, we constrained the variation between hyperplanes at adjacent timestamps by minimizing the euclidean distance:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Time Smoothness",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "L smooth (W ) = T \u22121 t=1 ||w t+1 \u2212 w t || 2 .",
"eq_num": "(18)"
}
],
"section": "Time Smoothness",
"sec_num": "3.3"
},
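{
"text": "A minimal sketch of the smoothness regularizer in Equation 18 is shown below, under the assumption that W stacks one learnable hyperplane w_t per discretized timestamp (shape T x d); this reading of adjacent-timestamp hyperplanes is our interpretation, not necessarily the authors' exact parameterization.\n\nimport torch\nimport torch.nn as nn\n\nT, d = 64, 172\nW = nn.Parameter(torch.randn(T, d))        # one hyperplane w_t per discretized timestamp\n\ndef smoothness_loss(W):\n    diff = W[1:] - W[:-1]                  # w_{t+1} - w_t for t = 1, ..., T-1\n    return (diff ** 2).sum()               # sum of squared Euclidean distances (Eq. 18)\n\nl_smooth = smoothness_loss(W)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Time Smoothness",
"sec_num": "3.3"
},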
{
"text": "We use the link prediction loss function for training CTGN:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Learning",
"sec_num": "3.4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "loss = \u03b1L smooth (W ) + L task ,",
"eq_num": "(19)"
}
],
"section": "Model Learning",
"sec_num": "3.4"
},
{
"text": "where \u03b1 is a tradeoff parameter, l task is a loss function defined as the cross-entropy of the prediction and the ground truth. Our experiment found a parameter \u03b1 of 0.002 for contact sequence dynamic graphs and 0.7 for event-based dynamic graphs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Learning",
"sec_num": "3.4"
},
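{
"text": "The following short sketch shows how the total loss of Equation 19 can be assembled for a batch of candidate links, with binary cross-entropy as the task loss; the batch size, probabilities, and labels are placeholder values for illustration only.\n\nimport torch\nimport torch.nn.functional as F\n\nalpha = 0.7                                    # event-based graphs (0.002 for contact-sequence graphs)\nW = torch.randn(64, 172, requires_grad=True)   # per-timestamp hyperplanes (see Eq. 18)\nl_smooth = (W[1:] - W[:-1]).pow(2).sum()       # Eq. 18\n\npred = torch.sigmoid(torch.randn(200))         # predicted link probabilities for a batch\nlabel = torch.randint(0, 2, (200,)).float()    # 1 = observed link, 0 = negative sample\nl_task = F.binary_cross_entropy(pred, label)\n\nloss = alpha * l_smooth + l_task               # Eq. 19\nloss.backward()",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Learning",
"sec_num": "3.4"
},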
{
"text": "In this section, we first introduce datasets, baselines and parameter settings. Then we compare our proposed method with other strong baselines and competing approaches for both the inductive and transductive tasks for two benchmarks contact sequence dynamic graph datasets and three eventbased dynamic graph datasets. We study both transductive and inductive tasks. For event-based dynamic graphs, we learn link prediction tasks. For contact-sequence dynamic graphs, we learn dynamic node classification and link prediction tasks. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment and Analysis",
"sec_num": "4"
},
{
"text": "We use five real-world datasets in our experiments, three event-based dynamic graphs: Netflix 1 , Mooc (Feng et al., 2019) and Lastfm (Cantador et al., 2011) , two contact sequence dynamic graphs: Wikipedia (Kumar et al., 2019) , Reddit (Kumar et al., 2019) . The statistics of the datasets used in our experiments are described in detail in Table 1 .",
"cite_spans": [
{
"start": 103,
"end": 122,
"text": "(Feng et al., 2019)",
"ref_id": "BIBREF5"
},
{
"start": 134,
"end": 157,
"text": "(Cantador et al., 2011)",
"ref_id": "BIBREF1"
},
{
"start": 207,
"end": 227,
"text": "(Kumar et al., 2019)",
"ref_id": "BIBREF9"
},
{
"start": 237,
"end": 257,
"text": "(Kumar et al., 2019)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [
{
"start": 342,
"end": 349,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Datasets",
"sec_num": "4.1"
},
{
"text": "We compare our model with four CTDG methods: Jodie (Kumar et al., 2019) , Dyrep (Trivedi et al., 2018) , TGAT (Xu et al., 2020) , TGN (Rossi et al., 2020) , APAN (Wang et al., 2020b) . And we also include four DTDG methods: GAE (Kipf and Welling, 2016), VGAE (Kipf and Welling, 2016), GAT (Veli\u010dkovi\u0107 et al., 2018) , GraphSAGE (Hamilton et al., 2017) as well as two state-of-the-art static graph neural ODE methods: CGNN (Xhonneux et al., 2019) , NDCN (Zang and Wang, 2019) .",
"cite_spans": [
{
"start": 51,
"end": 71,
"text": "(Kumar et al., 2019)",
"ref_id": "BIBREF9"
},
{
"start": 80,
"end": 102,
"text": "(Trivedi et al., 2018)",
"ref_id": "BIBREF18"
},
{
"start": 110,
"end": 127,
"text": "(Xu et al., 2020)",
"ref_id": "BIBREF25"
},
{
"start": 134,
"end": 154,
"text": "(Rossi et al., 2020)",
"ref_id": "BIBREF15"
},
{
"start": 162,
"end": 182,
"text": "(Wang et al., 2020b)",
"ref_id": "BIBREF21"
},
{
"start": 289,
"end": 314,
"text": "(Veli\u010dkovi\u0107 et al., 2018)",
"ref_id": null
},
{
"start": 327,
"end": 350,
"text": "(Hamilton et al., 2017)",
"ref_id": "BIBREF6"
},
{
"start": 416,
"end": 444,
"text": "CGNN (Xhonneux et al., 2019)",
"ref_id": null
},
{
"start": 452,
"end": 473,
"text": "(Zang and Wang, 2019)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline",
"sec_num": "4.2"
},
{
"text": "We set the batch size to 200 for training and patience to 5 for early stopping in all experiments. The node embedding dimension is 172. During training, we used 0.0001 as the learning rate for contact sequence dynamic graph datasets (Wikipedia and Reddit) and 0.00009 for eventbased dynamic graph datasets (Netflix, Mooc, Lastfm). The weight of time smoothness loss \u03b1 is set to 0.002 on Wikipedia , Reddit and 0.7 on Netflix, Mooc, Lastfm. We choose the LSTM layer as the decoder for link prediction task and MLP for node classification task. We report mean and standard deviation across 10 runs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parameter Setup",
"sec_num": "4.3"
},
{
"text": "1 https://vodclickstream.com/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parameter Setup",
"sec_num": "4.3"
},
{
"text": "To demonstrate the effectiveness of our proposed method, we compare CTGN with competitive baselines on five real-world event-based graph datasets. Table 2 shows the results on link prediction tasks in both transductive and inductive settings for three event-based datasets. It is evident that our approach has achieved better results than the discrete dynamics graph neural networks on almost all datasets, especially in the inductive setting. Table 3 shows the dynamic node classification and link prediction results on two contact sequencedatasets. CTGN has a solid ability to embed dynamic graphs. The conclusion can be obtained from the Table 2 and Table 3 . Figure 3 shows ablation studies on the Netflix dataset for both the transductive and inductive setting of the link prediction task. As we can see from Figure 3 (a) and 3(b), our model is not sensitive to batch size. When the training batch size is 100, CTGN has the same average precision as TGN. With the continuous increase of batch size, the performance of CTGN is more stable.",
"cite_spans": [],
"ref_spans": [
{
"start": 147,
"end": 154,
"text": "Table 2",
"ref_id": null
},
{
"start": 444,
"end": 451,
"text": "Table 3",
"ref_id": null
},
{
"start": 641,
"end": 660,
"text": "Table 2 and Table 3",
"ref_id": null
},
{
"start": 663,
"end": 671,
"text": "Figure 3",
"ref_id": "FIGREF2"
},
{
"start": 814,
"end": 822,
"text": "Figure 3",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Result",
"sec_num": "4.4"
},
{
"text": "This paper introduces CTGN, a continuous temporal graph neural network for learning representation for event-based dynamic graphs. We build the connection between temporal graph networks and continuous dynamical systems inspired by neural ODE. Our framework allows the user to trade off speed for precision by selecting different learning rates and the weight of time smoothness loss parameters during training. We demonstrate on the link prediction task against competitive baselines that our model can outperform many existing stateof-the-art methods.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
}
],
"back_matter": [
{
"text": " Table 3 : Experiments on contact sequence datasets. ROC AUC (%) for the dynamic node classification task, Average Precision (%) for link prediction task. *Static method, \u2020Does not support inductive.",
"cite_spans": [],
"ref_spans": [
{
"start": 1,
"end": 8,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "NetFlix",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Searchconvolutional neural networks",
"authors": [
{
"first": "James",
"middle": [],
"last": "Atwood",
"suffix": ""
},
{
"first": "Don",
"middle": [],
"last": "Towsley",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "James Atwood and Don Towsley. 2015. Search- convolutional neural networks. CoRR, abs/1511.02136.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "2nd workshop on information heterogeneity and fusion in recommender systems",
"authors": [
{
"first": "Iv\u00e1n",
"middle": [],
"last": "Cantador",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Brusilovsky",
"suffix": ""
},
{
"first": "Tsvi",
"middle": [],
"last": "Kuflik",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 5th ACM conference on Recommender systems",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Iv\u00e1n Cantador, Peter Brusilovsky, and Tsvi Kuflik. 2011. 2nd workshop on information heterogeneity and fu- sion in recommender systems (hetrec 2011). In Proceedings of the 5th ACM conference on Recom- mender systems, RecSys 2011, New York, NY, USA. ACM.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Neural ordinary differential equations. CoRR",
"authors": [
{
"first": "Yulia",
"middle": [],
"last": "Tian Qi Chen",
"suffix": ""
},
{
"first": "Jesse",
"middle": [],
"last": "Rubanova",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Bettencourt",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Duvenaud",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tian Qi Chen, Yulia Rubanova, Jesse Bettencourt, and David Duvenaud. 2018. Neural ordinary differential equations. CoRR, abs/1806.07366.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Temporal knowledge graph forecasting with neural ODE. CoRR",
"authors": [
{
"first": "Zifeng",
"middle": [],
"last": "Ding",
"suffix": ""
},
{
"first": "Zhen",
"middle": [],
"last": "Han",
"suffix": ""
},
{
"first": "Yunpu",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Volker",
"middle": [],
"last": "Tresp",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zifeng Ding, Zhen Han, Yunpu Ma, and Volker Tresp. 2021. Temporal knowledge graph forecasting with neural ODE. CoRR, abs/2101.05151.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Graph neural networks for social recommendation",
"authors": [
{
"first": "Wenqi",
"middle": [],
"last": "Fan",
"suffix": ""
},
{
"first": "Yao",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Qing",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Yuan",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Yihong",
"middle": [
"Eric"
],
"last": "Zhao",
"suffix": ""
},
{
"first": "Jiliang",
"middle": [],
"last": "Tang",
"suffix": ""
},
{
"first": "Dawei",
"middle": [],
"last": "Yin",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wenqi Fan, Yao Ma, Qing Li, Yuan He, Yihong Eric Zhao, Jiliang Tang, and Dawei Yin. 2019. Graph neural networks for social recommendation. CoRR, abs/1902.07243.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Understanding dropouts in moocs",
"authors": [
{
"first": "W",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Tang",
"suffix": ""
},
{
"first": "T",
"middle": [
"X"
],
"last": "Liu",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the AAAI Conference on Artificial Intelligence",
"volume": "33",
"issue": "",
"pages": "517--524",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "W. Feng, J. Tang, and T. X. Liu. 2019. Understand- ing dropouts in moocs. Proceedings of the AAAI Conference on Artificial Intelligence, 33:517-524.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Inductive representation learning on large graphs",
"authors": [
{
"first": "William",
"middle": [
"L"
],
"last": "Hamilton",
"suffix": ""
},
{
"first": "Rex",
"middle": [],
"last": "Ying",
"suffix": ""
},
{
"first": "Jure",
"middle": [],
"last": "Leskovec",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "William L. Hamilton, Rex Ying, and Jure Leskovec. 2017. Inductive representation learning on large graphs. CoRR, abs/1706.02216.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Representation learning for dynamic graphs: A survey",
"authors": [
{
"first": "S",
"middle": [
"M"
],
"last": "Kazemi",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Goel",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Jain",
"suffix": ""
},
{
"first": "I",
"middle": [],
"last": "Kobyzev",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Sethi",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Forsyth",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Poupart",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. M. Kazemi, R. Goel, K. Jain, I. Kobyzev, A. Sethi, P. Forsyth, and P. Poupart. 2019. Representation learning for dynamic graphs: A survey.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Variational graph auto-encoders",
"authors": [
{
"first": "N",
"middle": [],
"last": "Thomas",
"suffix": ""
},
{
"first": "Max",
"middle": [],
"last": "Kipf",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Welling",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas N. Kipf and Max Welling. 2016. Variational graph auto-encoders.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Predicting dynamic embedding trajectory in temporal interaction networks",
"authors": [
{
"first": "Srijan",
"middle": [],
"last": "Kumar",
"suffix": ""
},
{
"first": "Xikun",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Jure",
"middle": [],
"last": "Leskovec",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery Data Mining",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.1145/3292500.3330895"
]
},
"num": null,
"urls": [],
"raw_text": "Srijan Kumar, Xikun Zhang, and Jure Leskovec. 2019. Predicting dynamic embedding trajectory in temporal interaction networks. Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery Data Mining.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Typeaware anchor link prediction across heterogeneous networks based on graph attention network",
"authors": [
{
"first": "Y",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Shang",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Cao",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the AAAI Conference on Artificial Intelligence",
"volume": "34",
"issue": "",
"pages": "147--155",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "X Li, Y. Shang, Y. Cao, Y. Li, and Y. Liu. 2020. Type- aware anchor link prediction across heterogeneous networks based on graph attention network. Proceed- ings of the AAAI Conference on Artificial Intelligence, 34(1):147-155.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Retrieval-augmented generation for code summarization via hybrid GNN",
"authors": [
{
"first": "Shangqing",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Yu",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Xiaofei",
"middle": [],
"last": "Xie",
"suffix": ""
},
{
"first": "Jing",
"middle": [
"Kai"
],
"last": "Siow",
"suffix": ""
},
{
"first": "Yang",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2021,
"venue": "International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shangqing Liu, Yu Chen, Xiaofei Xie, Jing Kai Siow, and Yang Liu. 2021a. Retrieval-augmented gener- ation for code summarization via hybrid GNN. In International Conference on Learning Representa- tions.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Ragat: Relation aware graph attention network for knowledge graph completion",
"authors": [
{
"first": "Xiyang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Huobin",
"middle": [],
"last": "Tan",
"suffix": ""
},
{
"first": "Qinghong",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Guangyan",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2021,
"venue": "IEEE Access",
"volume": "9",
"issue": "",
"pages": "20840--20849",
"other_ids": {
"DOI": [
"10.1109/ACCESS.2021.3055529"
]
},
"num": null,
"urls": [],
"raw_text": "Xiyang Liu, Huobin Tan, Qinghong Chen, and Guangyan Lin. 2021b. Ragat: Relation aware graph attention network for knowledge graph completion. IEEE Access, 9:20840-20849.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Graph neural networks exponentially lose expressive power for node classification",
"authors": [
{
"first": "Kenta",
"middle": [],
"last": "Oono",
"suffix": ""
},
{
"first": "Taiji",
"middle": [],
"last": "Suzuki",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kenta Oono and Taiji Suzuki. 2019. Graph neural net- works exponentially lose expressive power for node classification.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Atsushi Yamashita, Hajime Asama, and Jinkyoo Park",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Poli",
"suffix": ""
},
{
"first": "Stefano",
"middle": [],
"last": "Massaroli",
"suffix": ""
},
{
"first": "Junyoung",
"middle": [],
"last": "Park",
"suffix": ""
}
],
"year": 2019,
"venue": "Graph neural ordinary differential equations. CoRR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Poli, Stefano Massaroli, Junyoung Park, At- sushi Yamashita, Hajime Asama, and Jinkyoo Park. 2019. Graph neural ordinary differential equations. CoRR, abs/1911.07532.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Temporal graph networks for deep learning on dynamic graphs",
"authors": [
{
"first": "E",
"middle": [],
"last": "Rossi",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Chamberlain",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Frasca",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Eynard",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Monti",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Bronstein",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "E. Rossi, B. Chamberlain, F. Frasca, D. Eynard, F. Monti, and M. Bronstein. 2020. Temporal graph networks for deep learning on dynamic graphs.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Convolutional LSTM network: A machine learning approach for precipitation nowcasting",
"authors": [
{
"first": "Xingjian",
"middle": [],
"last": "Shi",
"suffix": ""
},
{
"first": "Zhourong",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Hao",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Dit-Yan",
"middle": [],
"last": "Yeung",
"suffix": ""
},
{
"first": "Wai-Kin",
"middle": [],
"last": "Wong",
"suffix": ""
},
{
"first": "Wang-Chun",
"middle": [],
"last": "Woo",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xingjian Shi, Zhourong Chen, Hao Wang, Dit-Yan Ye- ung, Wai-Kin Wong, and Wang-chun Woo. 2015. Convolutional LSTM network: A machine learn- ing approach for precipitation nowcasting. CoRR, abs/1506.04214.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Know-evolve: Deep reasoning in temporal knowledge graphs",
"authors": [
{
"first": "R",
"middle": [],
"last": "Trivedi",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Farajtabar",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Dai",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Le",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. Trivedi, M. Farajtabar, Y. Wang, H. Dai, and S. Le. 2017. Know-evolve: Deep reasoning in temporal knowledge graphs.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Representation learning over dynamic graphs",
"authors": [
{
"first": "Rakshit",
"middle": [],
"last": "Trivedi",
"suffix": ""
},
{
"first": "Mehrdad",
"middle": [],
"last": "Farajtabar",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rakshit Trivedi, Mehrdad Farajtabar, Prasenjeet Biswal, and Hongyuan Zha. 2018. Representation learning over dynamic graphs. CoRR, abs/1803.04051.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Epne: Evolutionary pattern preserving network embedding",
"authors": [
{
"first": "J",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Jin",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Song",
"suffix": ""
},
{
"first": "X",
"middle": [],
"last": "Ma",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Wang, Y. Jin, G. Song, and X. Ma. 2020a. Epne: Evolutionary pattern preserving network embedding.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Apan: Asynchronous propagation attention network for realtime temporal graph embedding",
"authors": [
{
"first": "X",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Lyu",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Xia",
"suffix": ""
},
{
"first": "Q",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "X",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "X",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Cui",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Sun",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "X. Wang, D. Lyu, M. Li, Y. Xia, Q. Yang, X. Wang, X. Wang, P. Cui, Y. Yang, and B. Sun. 2020b. Apan: Asynchronous propagation attention network for real- time temporal graph embedding.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "A proposal on machine learning via dynamical systems",
"authors": [
{
"first": "",
"middle": [],
"last": "Weinan",
"suffix": ""
}
],
"year": 2017,
"venue": "Communications in Mathematics amp; Statistics",
"volume": "5",
"issue": "1",
"pages": "1--11",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Weinan. 2017. A proposal on machine learning via dy- namical systems. Communications in Mathematics amp; Statistics, 5(1):1-11.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Continuous graph neural networks",
"authors": [
{
"first": "M",
"middle": [],
"last": "Lpac Xhonneux",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Qu",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Tang",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lpac Xhonneux, M. Qu, and J Tang. 2019. Continuous graph neural networks.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "DynGraphGAN: Dynamic Graph Embedding via Generative Adversarial Networks",
"authors": [
{
"first": "Y",
"middle": [],
"last": "Xiong",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Fu",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "P",
"middle": [
"S"
],
"last": "Yu",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Y. Xiong, Y. Zhang, H. Fu, W. Wang, and P. S. Yu. 2019. DynGraphGAN: Dynamic Graph Embedding via Generative Adversarial Networks. Grundlagen des MA-Gesch\u00e4ftes.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Inductive representation learning on temporal graphs",
"authors": [
{
"first": "D",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Ruan",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Korpeoglu",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Kumar",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Achan",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. Xu, C. Ruan, E. Korpeoglu, S. Kumar, and K. Achan. 2020. Inductive representation learning on temporal graphs.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Neural dynamics on complex networks",
"authors": [
{
"first": "Chengxi",
"middle": [],
"last": "Zang",
"suffix": ""
},
{
"first": "Fei",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chengxi Zang and Fei Wang. 2019. Neural dynamics on complex networks. CoRR, abs/1908.06491.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Link prediction based on graph neural networks",
"authors": [
{
"first": "Muhan",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Yixin",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Muhan Zhang and Yixin Chen. 2018. Link pre- diction based on graph neural networks. CoRR, abs/1802.09691.",
"links": null
}
},
"ref_entries": {
"FIGREF2": {
"uris": null,
"type_str": "figure",
"text": "Ablation studies on the Netflix dataset for both the transductive and inductive setting of the link prediction task. 3(a) Sensitivity study result of batch size in inductive setting. 3(b) Sensitivity study result of batch size in transductive setting. 3(c) The relationship between number of sampled neighbors and the model performance in inductive setting. 3(d) The relationship between number of sampled neighbors and the model performance in transductive setting.",
"num": null
},
"TABREF1": {
"html": null,
"content": "<table/>",
"num": null,
"type_str": "table",
"text": "Statistics of the datasets used in our experiments."
}
}
}
}