Title: Few-shot Learning over Graphs Using Topological Prompts
Authors: Goel, Jaidev; Chen, Yuzhou; Gel, Yulia
Type: Article - Refereed
Issued: 2025-05-08
Record dates: 2025-08-12; 2025-08-12; 2025-08-01
Handle: https://hdl.handle.net/10919/137456
DOI: https://doi.org/10.1145/3701716.3715549
Format: application/pdf
Language: en
Rights: Creative Commons Attribution 4.0 International; The author(s)

Abstract: Prompt-based fine-tuning of pre-trained models has recently emerged as a promising trend for few-shot learning over graphs. Despite its significant potential, high variability and sensitivity to noise and perturbations remain the major obstacles to wider adoption of prompt-based fine-tuning. We propose a new solution to these open problems by introducing the machinery of persistent homology to graph prompts. In particular, to better guide the fine-tuning process on downstream tasks, we extract intrinsic topological descriptors of the activation graphs of the pre-trained models in the form of Fréchet means and incorporate this inherent topological information into the prompt-tuning process. Additionally, we bootstrap over the topological summaries to mitigate the high variability typically observed in prompt-based methods. Our extensive validation shows that the new Topo-Prompt tool yields not only relative gains in node classification accuracy of up to 11% but also up to a 4-fold reduction in variability with respect to state-of-the-art prompt-tuning methods. Furthermore, Topo-Prompt delivers superior robustness to perturbations, outperforming its competitors by up to 25% under noisy conditions.
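The abstract's variance-reduction idea can be illustrated with a minimal sketch: bootstrap resampling over vectorized topological summaries (e.g., persistence-diagram vectorizations of activation graphs), where, under a Euclidean metric on the vectorized summaries, the Fréchet mean reduces to the ordinary mean. All names and the synthetic data below are hypothetical stand-ins, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for vectorized topological summaries of activation
# graphs: one row per sample, 16-dimensional summary vectors.
summaries = rng.normal(loc=1.0, scale=0.5, size=(32, 16))

def bootstrap_frechet_mean(x, n_boot=200, rng=rng):
    """Bootstrap the Frechet mean of vectorized summaries.

    For vectors under the Euclidean metric, each resample's Frechet mean
    is just its arithmetic mean; aggregating over resamples gives a
    stabilized center plus a per-coordinate variability estimate.
    """
    n = x.shape[0]
    boot_means = np.stack(
        [x[rng.integers(0, n, size=n)].mean(axis=0) for _ in range(n_boot)]
    )
    return boot_means.mean(axis=0), boot_means.std(axis=0)

center, spread = bootstrap_frechet_mean(summaries)
```

The bootstrap spread of the mean is roughly the raw per-sample spread divided by the square root of the sample size, which is the sense in which aggregating resampled summaries tames the variability of any single prompt-tuning run.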