Bridging the Gap between Spatial and Spectral Domains: A Unified Framework for Graph Neural Networks
dc.contributor.author | Chen, Zhiqian | en |
dc.contributor.author | Chen, Fanglan | en |
dc.contributor.author | Zhang, Lei | en |
dc.contributor.author | Ji, Taoran | en |
dc.contributor.author | Fu, Kaiqun | en |
dc.contributor.author | Zhao, Liang | en |
dc.contributor.author | Chen, Feng | en |
dc.contributor.author | Wu, Lingfei | en |
dc.contributor.author | Aggarwal, Charu | en |
dc.contributor.author | Lu, Chang-Tien | en |
dc.date.accessioned | 2023-11-02T13:02:42Z | en |
dc.date.available | 2023-11-02T13:02:42Z | en |
dc.date.issued | 2023-10 | en |
dc.date.updated | 2023-11-01T08:00:34Z | en |
dc.description.abstract | Deep learning's strong performance has been widely recognized in recent years. Graph neural networks (GNNs) are designed to handle graph-structured data, which classical deep learning does not manage easily. Because most GNNs were developed from distinct theoretical foundations, direct comparisons between them are difficult. Prior research has concentrated primarily on categorizing existing models, paying little attention to their intrinsic connections. This study establishes a unified framework that integrates GNNs on the basis of spectral graph theory and approximation theory. The framework tightly couples spatial- and spectral-based GNNs while closely associating the approaches within each respective domain. | en |
dc.description.version | Accepted version | en |
dc.format.mimetype | application/pdf | en |
dc.identifier.doi | https://doi.org/10.1145/3627816 | en |
dc.identifier.uri | http://hdl.handle.net/10919/116587 | en |
dc.language.iso | en | en |
dc.publisher | ACM | en |
dc.rights | In Copyright | en |
dc.rights.holder | The author(s) | en |
dc.rights.uri | http://rightsstatements.org/vocab/InC/1.0/ | en |
dc.title | Bridging the Gap between Spatial and Spectral Domains: A Unified Framework for Graph Neural Networks | en |
dc.type | Article - Refereed | en |
dc.type.dcmitype | Text | en |