Authors: Yang, Jiue-An; Chen, Yuzhou; Tribby, Calvin; Lee, Huikyo; Erhunmwunsee, Loretta; Benmarhnia, Tarik; Thompson, Caroline; Gel, Yulia; Jankowska, Marta
Date accessioned: 2026-01-09
Date available: 2026-01-09
Date issued: 2025-11-03
URI: https://hdl.handle.net/10919/140726
Abstract: Traditional cancer rate estimations are often limited in spatial resolution and lack consideration of environmental factors. Satellite imagery has become a vital data source for monitoring diverse urban environments, supporting applications across environmental, socio-demographic, and public health domains. However, while deep learning (DL) tools, particularly convolutional neural networks, have demonstrated strong performance in extracting features from high-resolution imagery, their reliance on local spatial cues often limits their ability to capture complex, non-local, and higher-order structural information. To overcome this limitation, we propose a novel LLM-based multi-agent coordination system for satellite image analysis, which integrates visual and contextual reasoning through a simplicial contrastive learning framework (Agent-SNN). Our Agent-SNN constructs two augmented superpixel-based graphs and maximizes mutual information between their latent simplicial complex representations, thereby enabling the system to learn both local and global topological features. The LLM-based agents generate structured prompts that guide the alignment of these representations across modalities. Experiments with satellite imagery of Los Angeles and San Diego demonstrate that Agent-SNN achieves significant improvements over state-of-the-art baselines in regional cancer prevalence estimation tasks.
Format: application/pdf
Language: en
Rights: Creative Commons Attribution 4.0 International
Rights holder: The author(s)
Title: LLM-Based Multi-Agent System and Simplicial Self-Supervised Learning Model for Regional Cancer Prevalence Estimation Using Satellite Imagery
Type: Article - Refereed
Copyright date: 2026-01-01
DOI: https://doi.org/10.1145/3748636.3763225
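The abstract describes maximizing mutual information between the latent representations of two augmented graph views. A common way to realize such an objective is a symmetric InfoNCE-style contrastive loss over paired embeddings. The sketch below is illustrative only, not the authors' implementation: the function name `info_nce_loss`, the temperature `tau`, and the use of plain NumPy arrays are assumptions for the sake of a minimal, self-contained example.

```python
import numpy as np

def info_nce_loss(z1, z2, tau=0.5):
    """Symmetric InfoNCE loss between paired embeddings of two views.

    z1, z2: (n, d) arrays; row i of z1 and row i of z2 form a positive pair,
    all other cross-view rows act as negatives.
    """
    # L2-normalize so dot products are cosine similarities.
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = z1 @ z2.T / tau  # (n, n) temperature-scaled similarities

    def directional_loss(s):
        # -log softmax over each row, evaluated at the diagonal (positive pair).
        log_prob = s - np.log(np.exp(s).sum(axis=1, keepdims=True))
        return -np.diag(log_prob).mean()

    # Average the view1->view2 and view2->view1 directions.
    return (directional_loss(sim) + directional_loss(sim.T)) / 2
```

Correctly aligned views yield a lower loss than mismatched ones, which is the signal that drives the two graph encoders toward agreement.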