Authors: Edwin Hancock¹,*, Hangyuan Du² and Lixin Cui³
JSA, Vol. 4 (2025).
1 Department of Computer Science, University of York, York, United Kingdom.
2 School of Computer and Information Technology, Shanxi University, Taiyuan, China.
3 School of Information, Central University of Finance and Economics, Beijing, China.
* Correspondence: bailu@bnu.edu.cn
Received: 23 October 2023; Accepted: 10 December 2024; Published: 30 July 2025.
Abstract: Over-smoothing is a fundamental limitation of deep graph neural networks (GNNs), in which node representations become indistinguishable as message-passing depth increases. Recent studies demonstrate that over-smoothing does not affect graph structures uniformly, but instead manifests most severely in structurally complex, high-entropy regions of graphs. While existing methods mitigate this issue using fixed entropy measures and heuristic node selection, they lack adaptivity and generalization across graph domains. In this paper, we propose a learnable multi-entropy localization framework for preventing over-smoothing in deep GNNs. The proposed method integrates multiple graph entropy measures—Shannon entropy, von Neumann entropy, and spectral entropy—into a unified, trainable entropy fusion module that dynamically identifies over-smoothing-prone regions. A differentiable entropy-aware gating mechanism selectively regulates message passing only in critical regions, preserving expressive power while maintaining depth scalability. Extensive experiments on benchmark graph classification and node classification datasets demonstrate that the proposed approach consistently outperforms state-of-the-art over-smoothing mitigation techniques, particularly in deep GNN settings. Our framework offers a principled and general solution to over-smoothing by transforming entropy from a diagnostic metric into a learnable structural signal.
Keywords: Graph Neural Networks, Over-Smoothing, Graph Entropy, Multi-Entropy Learning, Deep Learning on Graphs.