Influence Functions for Edge Edits in Non-Convex Graph Neural Networks
Jaeseung Heo, Kyeongheung Yun, Seokwon Yoon, MoonJeong Park, Jungseul Ok, Dongwoo Kim · June 05, 2025
Summary
A proximal Bregman response function extends influence estimation to non-convex graph neural networks, improving the accuracy of edge-influence prediction. The unified formulation handles both edge deletions and insertions and accounts for the resulting changes in message propagation, offering interpretability and robustness. Applied to analyzing edge rewiring, identifying adversarial edits, and assessing node classification, it yields high correlation between estimated and actual influence: on a four-layer GCN across the Influence, CiteSeer, and PubMed datasets, correlations range from 0.94 to 0.98, while two-layer ChebNet and GAT models reach correlations of 0.8–0.99 for influence and 0.8–0.94 for Dirichlet energy. The approach is validated in both deletion and insertion scenarios, with 143 of 200 inserted edges reducing over-squashing, although validation loss and over-smoothing do not always improve. Experiments used a four-layer GCN trained on the Texas dataset, run on various GPUs with model hyperparameters tuned over a specified search space.
Introduction
Background
Overview of non-convex graph neural networks
Challenges in edge influence prediction
Objective
To introduce a unified approach for enhancing edge influence prediction accuracy using a proximal Bregman response function
Method
Data Collection
Description of datasets used (Influence, CiteSeer, PubMed)
Data Preprocessing
Techniques for preparing data for model training
Model Architecture
Description of the four-layer GCN, two-layer ChebNet, and GAT models
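A minimal sketch of the GCN building block these architectures share may help fix notation. The layer below implements the standard GCN propagation rule, H' = ReLU(D^{-1/2}(A + I)D^{-1/2} H W); the four-layer stack, toy graph, and random weights are illustrative placeholders, not the paper's trained model.

```python
import numpy as np

def gcn_layer(A, H, W):
    """One GCN layer: H' = ReLU(D^{-1/2} (A + I) D^{-1/2} H W)."""
    A_hat = A + np.eye(A.shape[0])            # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    A_norm = (A_hat * d_inv_sqrt[:, None]) * d_inv_sqrt[None, :]  # symmetric normalization
    return np.maximum(A_norm @ H @ W, 0.0)    # ReLU

# Four stacked layers, mirroring the paper's four-layer GCN
# (toy 3-node path graph; weights are random placeholders).
rng = np.random.default_rng(0)
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
H = rng.standard_normal((3, 8))
for _ in range(4):
    H = gcn_layer(A, H, rng.standard_normal((8, 8)))
```

Because each layer multiplies by the normalized adjacency, a four-layer stack aggregates information from four-hop neighborhoods, which is why edge edits can change predictions well beyond the edited endpoints.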
Proximal Bregman Response Function
Detailed explanation of the function and its role in improving edge influence prediction
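For orientation, one standard formulation of the proximal Bregman response function from the influence-function literature (stated there for training-example perturbations; the paper adapts the idea to edge edits, and its exact notation may differ) is:

```latex
r(\epsilon) \;=\; \arg\min_{\theta} \;
  \frac{1}{N}\sum_{i=1}^{N} D_{\ell}\!\big(f_\theta(x_i),\, f_{\theta^s}(x_i)\big)
  \;+\; \epsilon\, \Delta\mathcal{L}(\theta)
  \;+\; \frac{\lambda}{2}\,\lVert \theta - \theta^s \rVert^2
```

Here \(\theta^s\) are the trained (possibly non-converged) parameters, \(D_\ell\) is the Bregman divergence induced by the loss in output space, \(\Delta\mathcal{L}\) is the loss perturbation introduced by the edit, and \(\lambda\) is a proximity (damping) term. The Bregman and proximity terms keep the response anchored to the trained model, which is what makes the construction well-posed for non-convex objectives where classical influence functions break down.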
Edge Influence Analysis
Techniques for analyzing edge rewiring, identifying adversarial edits, and assessing node classification
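The "actual" influence of an edge edit can be defined as the change in loss after applying the edit. The sketch below illustrates this for edge deletion with a fixed (untrained) two-step propagation model standing in for a trained GNN; the model, data, and squared-error loss are all illustrative assumptions, not the paper's setup.

```python
import numpy as np

def propagate(A, X):
    """Two-step symmetric-normalized propagation (a stand-in for a trained GNN)."""
    A_hat = A + np.eye(len(A))
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    A_norm = (A_hat * d_inv_sqrt[:, None]) * d_inv_sqrt[None, :]
    return A_norm @ (A_norm @ X)

def edge_influence(A, X, Y, edge):
    """Actual influence of deleting `edge`: change in loss after the edit."""
    def loss(adj):
        return float(((propagate(adj, X) - Y) ** 2).mean())
    A_del = A.copy()
    i, j = edge
    A_del[i, j] = A_del[j, i] = 0.0   # delete the undirected edge
    return loss(A_del) - loss(A)      # positive => deleting the edge hurts

rng = np.random.default_rng(0)
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
X = rng.standard_normal((3, 4))
Y = propagate(A, X)                   # targets from the unedited graph
inf_01 = edge_influence(A, X, Y, (0, 1))
```

The point of the influence-function machinery is to estimate this quantity without re-evaluating (or retraining) the model for every candidate edge, which would be prohibitive on graphs with many possible edits.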
Results
Correlation Analysis
Correlation between estimated and actual influence for different models
Correlation values for four-layer GCN, two-layer ChebNet, and GAT models
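Correlations like those reported here are typically computed between the vector of estimated influences and the vector of actual influences over a set of candidate edges. A small sketch (using Pearson, plus Spearman as Pearson on ranks; the example values are made up):

```python
import numpy as np

def pearson_spearman(est, act):
    """Pearson and Spearman correlation between estimated and actual influence."""
    est, act = np.asarray(est, float), np.asarray(act, float)
    pearson = np.corrcoef(est, act)[0, 1]
    # Spearman = Pearson computed on the ranks (assumes no ties)
    r_est = est.argsort().argsort()
    r_act = act.argsort().argsort()
    spearman = np.corrcoef(r_est, r_act)[0, 1]
    return pearson, spearman

p, s = pearson_spearman([0.1, 0.4, 0.2, 0.9], [1.0, 3.0, 2.0, 5.0])
```

A high Spearman correlation is the more relevant signal when the estimates are used for ranking edits (e.g., picking the most harmful or most helpful edges), since it is insensitive to monotone miscalibration of the estimates.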
Deletion and Insertion Scenarios
Validation of the function's effectiveness in edge deletion and insertion scenarios
Analysis of 143 out of 200 inserted edges reducing over-squashing
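Dirichlet energy, mentioned in the summary, is a standard quantity in this area: it measures how much node features vary across edges (low energy indicates over-smoothing). A direct implementation of \(E(X) = \tfrac{1}{2}\sum_{i,j} A_{ij}\,\lVert x_i - x_j \rVert^2\), with a toy example:

```python
import numpy as np

def dirichlet_energy(A, X):
    """E(X) = 1/2 * sum_{i,j} A[i,j] * ||x_i - x_j||^2 over the adjacency matrix."""
    n = len(A)
    energy = 0.0
    for i in range(n):
        for j in range(n):
            if A[i, j]:
                energy += A[i, j] * np.sum((X[i] - X[j]) ** 2)
    return 0.5 * energy   # each undirected edge is counted twice above

A2 = np.array([[0.0, 1.0], [1.0, 0.0]])  # single undirected edge
X2 = np.array([[0.0], [2.0]])
e_val = dirichlet_energy(A2, X2)          # 0.5 * (4 + 4) = 4.0
```

Tracking this quantity before and after an edge edit is one way to check whether an insertion that relieves over-squashing also pushes the model toward over-smoothing, consistent with the trade-off noted in the limitations.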
Limitations
Discussion on validation loss and over-smoothing challenges
Application
Case Studies
Analysis on the Texas dataset using various GPUs
Model Hyperparameters
Description of the search space for tuning model hyperparameters
Conclusion
Summary of Findings
Recap of the study's main contributions
Future Work
Suggestions for further research and improvements