Abstract:
Self-supervised graph-level representation learning has recently received considerable attention. Given varied input distributions, jointly learning graphs' unique and common features is vital to downstream tasks. Inspired by graph contrastive learning (GCL), which aims to maximize the agreement between graph representations from different views, we propose an Adaptive self-supervised framework, Ada-MIP, that considers both the Mutual Information between views (unique features) and inter-graph Proximity (common features). Specifically, Ada-MIP learns graphs' unique information through a learnable and probably injective augmenter, which acquires more adaptive views than the augmentation strategies applied by existing GCL methods; to learn graphs' common information, we employ graph kernels to compute inter-graph proximity and learn graph representations that preserve this precomputed proximity. By sharing a global encoder, graphs' unique and common information can be well integrated into the representations learned by Ada-MIP. Ada-MIP also extends to semi-supervised scenarios, and our experiments confirm its superior performance on both unsupervised and semi-supervised tasks.
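To make the two-part objective concrete, below is a minimal PyTorch sketch (not the authors' implementation): an InfoNCE term maximizes mutual information between two augmented views of each graph, and a second term preserves graph-kernel proximity among the shared encoder's representations. All names here (`encoder`, `augmenter`, `kernel_matrix`, `lambda_prox`) are illustrative assumptions rather than identifiers from the paper.

```python
import torch
import torch.nn.functional as F

def info_nce(z1, z2, tau=0.5):
    """InfoNCE between two batches of view representations (N x d);
    view pairs of the same graph are the positives on the diagonal."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / tau
    labels = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, labels)

def proximity_loss(z, kernel_matrix):
    """Match pairwise cosine similarities of graph representations to a
    precomputed N x N graph-kernel proximity matrix (e.g., from a
    Weisfeiler-Lehman kernel, rescaled to a comparable range)."""
    z = F.normalize(z, dim=1)
    sim = z @ z.t()
    return F.mse_loss(sim, kernel_matrix)

def ada_mip_loss(encoder, augmenter, graphs, kernel_matrix, lambda_prox=1.0):
    """Combined objective: MI between learnable views (unique features)
    plus kernel-proximity preservation (common features), with a single
    global encoder shared across both terms."""
    view1, view2 = augmenter(graphs), augmenter(graphs)  # learnable augmenter
    z1, z2 = encoder(view1), encoder(view2)
    z = encoder(graphs)
    return info_nce(z1, z2) + lambda_prox * proximity_loss(z, kernel_matrix)
```

In this sketch, sharing `encoder` across the contrastive and proximity terms is what lets a single set of representations carry both the view-specific (unique) and kernel-based (common) information the abstract describes.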