Journal Articles, Association for Computing Machinery (ACM)
Recent Submissions
- A Dynamic Characteristic Aware Index Structure Optimized for Real-world Datasets. Yang, Jin; Yoon, Heejin; Yun, Gyeongchan; Noh, Sam; Choi, Young-ri (ACM, 2024-12). Many datasets in real life are complex and dynamic: their key densities vary over the key space and their key distributions change over time. It is challenging for an index structure to efficiently support all key operations for data management, in particular search, insert, and scan, for such dynamic datasets. In this paper, we present DyTIS (Dynamic dataset Targeted Index Structure), an index that targets dynamic datasets. DyTIS, though based on the structure of Extendible hashing, leverages the CDF of the key distribution of a dataset, and learns and adjusts its structure as the dataset grows. The key novelty behind DyTIS is to group keys by the natural key order and maintain keys in sorted order in each bucket to support scan operations within a hash index. We also define what we refer to as a dynamic dataset and propose a means to quantify its dynamic characteristics. Our experimental results show that DyTIS provides higher performance than the state-of-the-art learned index for the dynamic datasets considered. We also analyze the effects of the dynamic characteristics of datasets, including sequential datasets, as well as the effect of multiple threads on the performance of the indexes.
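The bucket design the DyTIS abstract describes (hash buckets grouped by natural key order, with keys kept sorted inside each bucket so range scans work within a hash index) can be illustrated with a minimal sketch. This is not the authors' implementation: the fixed range partition below is a stand-in for the CDF-learned structure the paper uses, and keys are assumed to be integers in [0, 100).

```python
# Toy hash index whose buckets keep keys sorted, enabling scans.
import bisect

class SortedBucketIndex:
    def __init__(self, num_buckets=4):
        self.num_buckets = num_buckets
        self.buckets = [[] for _ in range(num_buckets)]

    def _bucket(self, key):
        # DyTIS groups keys by natural order via the learned CDF; a simple
        # fixed range partition stands in for that here.
        return min(key * self.num_buckets // 100, self.num_buckets - 1)

    def insert(self, key):
        # Keep each bucket sorted on insert so scans need no extra sorting.
        bisect.insort(self.buckets[self._bucket(key)], key)

    def search(self, key):
        b = self.buckets[self._bucket(key)]
        i = bisect.bisect_left(b, key)
        return i < len(b) and b[i] == key

    def scan(self, lo, hi):
        # Buckets cover contiguous key ranges and are internally sorted,
        # so a range scan visits buckets in order and filters each one.
        out = []
        for b in self.buckets[self._bucket(lo):self._bucket(hi) + 1]:
            out.extend(k for k in b if lo <= k <= hi)
        return out
```

Because the hash function is order-preserving, a scan touches only the buckets overlapping the requested range, which is the property that lets a hash index answer range queries at all.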
- Experimental Validation of a 3GPP compliant 5G-based Positioning System. Dhungel, Sarik; Duggal, Gaurav; Ron, Dara; Tripathi, Nishith; Buehrer, R. Michael; Reed, Jeffrey H.; Shah, Vijay K. (ACM, 2024-12-04). The advent of 5G positioning techniques by 3GPP has unlocked possibilities for applications in public safety, vehicular systems, and location-based services. However, these applications demand accurate and reliable positioning performance, which has led to the proposal of newer positioning techniques. To further advance the research on these techniques, in this paper, we develop a 3GPP-compliant 5G positioning testbed, incorporating gNodeBs (gNBs) and User Equipment (UE). The testbed uses New Radio (NR) Positioning Reference Signals (PRS) transmitted by the gNB to generate Time of Arrival (TOA) estimates at the UE. We mathematically model the inter-gNB and UE-gNB time offsets affecting the TOA estimates and examine their impact on positioning performance. Additionally, we propose a calibration method for estimating these time offsets. Furthermore, we investigate the environmental impact on the TOA estimates. Our findings are based on our mathematical model and supported by experimental results.
- Automated and Blind Detection of Low Probability of Intercept RF Anomaly Signals. Gusain, Kuanl; Hassan, Zoheb; Couto, David; Malek, Mai Abdel; Shah, Vijay K; Zheng, Lizhong; Reed, Jeffrey H. (ACM, 2024-12-04). Automated spectrum monitoring necessitates the accurate detection of low probability of intercept (LPI) radio frequency (RF) anomaly signals to identify unwanted interference in wireless networks. However, detecting these unforeseen low-power RF signals is fundamentally challenging due to the scarcity of labeled RF anomaly data. In this paper, we introduce WANDA (Wireless ANomaly Detection Algorithm), an automated framework designed to detect LPI RF anomaly signals in low signal-to-interference ratio (SIR) environments without relying on labeled data. WANDA operates through a two-step process: (i) Information extraction, where a convolutional neural network (CNN) utilizing soft Hirschfeld-Gebelein-Rényi correlation (HGR) as the loss function extracts informative features from RF spectrograms; and (ii) Anomaly detection, where the extracted features are applied to a one-class support vector machine (SVM) classifier to infer RF anomalies. To validate the effectiveness of WANDA, we present a case study focused on detecting unknown Bluetooth signals within the WiFi spectrum using a practical dataset. Experimental results demonstrate that WANDA outperforms other methods in detecting anomaly signals across a range of SIR values (-10 dB to 20 dB).
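WANDA's second step feeds extracted features to a one-class SVM. As an illustration of one-class anomaly detection in general (not the paper's classifier), the toy detector below fits only on "normal" feature vectors and flags any point far from their mean; the twice-the-average-distance threshold is an assumption chosen for the example.

```python
# Toy one-class detector: train on normal data only, flag outliers.
import math

class OneClassDetector:
    def fit(self, normal):
        n, dim = len(normal), len(normal[0])
        # Center of the normal training data.
        self.mean = [sum(v[d] for v in normal) / n for d in range(dim)]
        dists = [math.dist(v, self.mean) for v in normal]
        # Accept anything within ~2x the average distance seen in training.
        self.threshold = 2.0 * sum(dists) / n
        return self

    def is_anomaly(self, x):
        return math.dist(x, self.mean) > self.threshold
```

A one-class SVM replaces this crude mean-and-radius rule with a learned boundary around the normal class, but the workflow is the same: fit on unlabeled normal data, then score unseen points.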
- CLOUD-D RF: Cloud-based Distributed Radio Frequency Heterogeneous Spectrum Sensing. Green, Dylan; McIrvin, Caleb; Thaboun, River; Wemlinger, Cora; Risi, Joseph; Jones, Alyse; Toubeh, Maymoonah; Headley, William (ACM, 2024-12-04). In wireless communications, collaborative spectrum sensing is a process that leverages radio frequency (RF) data from multiple RF sensors to make more informed decisions and lower the overall risk of failure in distributed settings. However, most research in collaborative sensing focuses on homogeneous systems using identical sensors, which would not be the case in a real-world wireless setting. Instead, due to differences in physical location, each RF sensor would see different versions of signals propagating in the environment, establishing the need for heterogeneous collaborative spectrum sensing. Hence, this paper explores the implementation of collaborative spectrum sensing across heterogeneous sensors, with sensor fusion occurring in the cloud for optimal decision making. We investigate three different machine learning-based fusion methods and test the fused model’s ability to perform modulation classification, with a primary goal of optimizing network bandwidth for next-generation network applications. Our analysis demonstrates that our fusion process is able to optimize the number of features extracted from the heterogeneous sensors according to their varying performance limitations, simulating adverse conditions in a real-world wireless setting.
- An Interactive Visual Presentation of Core Database Design Concepts. Abdelaziz, Noha; Farghally, Mohammed; Mohammed, Mostafa; Soliman, Taysir (ACM, 2024-12-05). Database design is a core topic in Computer Science (CS) curricula at the university level. Students often encounter difficulties and misconceptions while learning these concepts. Previous research attempted to address these learning difficulties through interactive visual demonstrations. However, most of these resources are not well integrated into the curriculum and lack a proper educational evaluation. In this paper, we present a set of online interactive visualizations, named DataBase Visualizations (DBVs), that address common database design learning difficulties in an introductory undergraduate database course. Core database design concepts are visualized step-by-step, facilitating a deep understanding of relationship establishment and mapping onto a relational schema. DBVs can be easily embedded in an online eTextbook, facilitating integration with the existing curriculum. We present our findings from an evaluation study of the effectiveness of DBVs when applied to a semester-long undergraduate database course at a large public institution in the Middle East. Results indicate that intervention group students had significantly higher scores on a post-test, offered as part of the final, compared to control group students using primarily traditional textual content. Furthermore, intervention group students were surveyed at the end of the semester about the value of DBVs to their learning process and suggestions for improvement. Survey results indicate that DBVs were clear, engaging, and easy to use. We believe that DBVs will be helpful to undergraduate database instructors in their teaching of basic database design concepts.
- Mutating Matters: Analyzing the Influence of Mutation Testing in Programming Courses. Mansur, Rifat Sabbir; Shaffer, Clifford; Edwards, Stephen (ACM, 2024-12-05). Mutation testing is used to gauge the quality of software test suites by introducing small faults, called “mutations”, into code to assess if a test suite can detect them. Although it has been applied extensively in the software industry, mutation testing’s use in programming courses faces both computational and pedagogical barriers. This study examines the impact of mutation testing on student performance in a post-CS2 Data Structures and Algorithms course with 3-4 week life-cycle programming projects. We collected a semester of data with projects using only code coverage (control group) and another semester that used mutation testing (experimental group). We investigated three aspects of mutation testing impact: the quality of student-written test suites, the correctness and complexity of students’ solution code, and the degree of incremental test writing. Our findings suggest that students using mutation testing, as a group, demonstrated higher quality test suites and wrote better solution code compared to students using traditional code coverage methods. Students using mutation testing were more likely to exhibit incremental testing practices.
- AI in and for K-12 Informatics Education. Life after Generative AI. Barendsen, Erik; Lonati, Violetta; Quille, Keith; Altin, Rukiye; Divitini, Monica; Hooshangi, Sara; Karnalim, Oscar; Kiesler, Natalie; Melton, Madison; Suero Montero, Calkin; Morpurgo, Anna (ACM, 2024-12-05). The use and adoption of Generative AI (GenAI) has revolutionised various sectors, including computing education. However, this narrow focus comes at a cost to wider AI-in-education research. This working group aims to examine current trends and draw on multiple sources of information to identify areas of AI research in K-12 informatics education that are underserved but needed in the post-GenAI era. Our research focuses on three areas: curriculum, teacher professional learning, and policy. The goal is to identify trends and shortfalls for AI in and for K-12 informatics education. We will systematically review the current literature to identify themes and emerging trends in AI education at K-12, under two facets: curricula and teacher professional learning. In addition, we will conduct interviews and surveys with educators and AI experts. Next, we will examine current policy (such as the European AI Act and the European Commission guidelines on the use of AI and data in education and training, as well as international counterparts). Policies are often developed by both educators and domain experts, thus providing a source of topics or areas that may be added to our findings. Finally, by synthesising insights from educators, AI experts, and policymakers, as well as the literature and policy, our working group seeks to highlight possible future trends and shortfalls.
- sMVX: Multi-Variant Execution on Selected Code Paths. Yeoh, Sengming; Wang, Xiaoguang; Jang, Jae-Won; Ravindran, Binoy (ACM, 2024-12-02). Multi-Variant Execution (MVX) is an effective way to detect memory corruption vulnerabilities and intrusions, or to support live software updates. A traditional MVX system concurrently runs multiple copies of functionally identical, layout-different program variants. Therefore, a typical memory corruption attack that forges pointers can succeed on at most one variant, leading the other variant(s) to crash. The replicated execution adds software security and reliability but also multiplies CPU and memory usage. This paper presents sMVX, a flexible multi-variant execution system that replicates variants only on selected code paths. sMVX allows end-users to annotate a target program and indicate sensitive code regions for multi-variant execution. Such code regions can be authentication-related code or sensitive functions that handle potentially malicious input data. An sMVX runtime only replicates the sensitive functions and executes them in lockstep. We have implemented a prototype of sMVX using an in-process code monitor. The sMVX monitor supports selected-code-path MVX from within the target program’s address space, but the monitor is isolated from the target’s code by Intel Memory Protection Keys (MPK). We evaluated sMVX using a benchmark suite and two server applications. The evaluation demonstrates that sMVX exhibits a comparable performance overhead to state-of-the-art MVX systems but requires 20% fewer CPU cycles and 49% less memory consumption on server applications.
- Blocking Tracking JavaScript at the Function Granularity. Amjad, Abdul Haddi; Munir, Shaoor; Shafiq, Zubair; Gulzar, Muhammad Ali (ACM, 2024-12-02). Modern websites extensively rely on JavaScript to implement both functionality and tracking. Existing privacy-enhancing content blocking tools struggle against mixed scripts, which simultaneously implement both functionality and tracking. Blocking such scripts would break functionality, and not blocking them would allow tracking. We propose NoT.js, a fine-grained JavaScript blocking tool that operates at the function-level granularity. NoT.js’s strengths lie in analyzing the dynamic execution context, including the call stack and calling context of each JavaScript function, and then encoding this context to build a rich graph representation. NoT.js trains a supervised machine learning classifier on a webpage’s graph representation to first detect tracking at the function level and then automatically generate surrogate scripts that preserve functionality while removing tracking. Our evaluation of NoT.js on the top-10K websites demonstrates that it achieves high precision (94%) and recall (98%) in detecting tracking functions, outperforming the state-of-the-art while being robust against off-the-shelf JavaScript obfuscation. Fine-grained detection of tracking functions allows NoT.js to automatically generate surrogate scripts, which our evaluation shows successfully remove tracking functions without causing major breakage. Our deployment of NoT.js shows that mixed scripts are present on 62.3% of the top-10K websites, with 70.6% of the mixed scripts being third-party scripts that engage in tracking activities such as cookie ghostwriting.
- Verifiably Correct Lifting of Position-Independent x86-64 Binaries to Symbolized Assembly. Verbeek, Freek; Naus, Nico; Ravindran, Binoy (ACM, 2024-12-02). We present an approach to lift position-independent x86-64 binaries to symbolized NASM. Symbolization is a decompilation step that enables binary patching: functions can be modified, and instructions can be interspersed. Moreover, it is the first abstraction step in a larger decompilation chain. The produced NASM is recompilable, and we extensively test the recompiled binaries to see if they exhibit the same behavior as the original ones. In addition to testing, the produced NASM is accompanied by a certificate, constructed in such a way that if all theorems in the certificate hold, symbolization has occurred correctly. The original and recompiled binary are lifted again with a third-party decompiler (Ghidra). These representations, as well as the certificate, are loaded into the Isabelle/HOL theorem prover, where proof scripts ensure that correctness can be proven automatically. We have applied symbolization to various stripped binaries from various sources, from various compilers, and ranging over various optimization levels. We show how symbolization enables binary-level patching, by tackling challenges originating from industry.
- A First Look at Security and Privacy Risks in the RapidAPI Ecosystem. Liao, Song; Cheng, Long; Luo, Xiapu; Song, Zheng; Cai, Haipeng; Yao, Danfeng (Daphne); Hu, Hongxin (ACM, 2024-12-02). With the emergence of the open API ecosystem, third-party developers can publish their APIs on the API marketplace, significantly facilitating the development of cutting-edge features and services. The RapidAPI platform is currently the largest API marketplace, providing over 40,000 APIs that have been used by more than 4 million developers. However, such an open ecosystem also raises security and privacy concerns associated with the APIs hosted on the platform. In this work, we perform the first large-scale analysis of 32,089 APIs on the RapidAPI platform. By searching GitHub code and Android apps, we find that 3,533 RapidAPI keys, which are important and used in API request authorization, have been leaked in the wild. These keys can be exploited to launch various attacks, such as Resource Exhaustion Running, Theft of Service, Data Manipulation, and User Data Breach attacks. We also explore risks in API metadata that can be abused by adversaries. Due to the lack of a strict certification system, adversaries can manipulate API metadata to perform typosquatting attacks on API URLs, impersonate other developers or renowned companies, and publish spamming APIs on the platform. Lastly, we analyze the privacy non-compliance of APIs and applications, e.g., Android apps, that call these APIs with data collection. We find that 1,709 APIs collect sensitive data and 94% of them don’t provide a complete privacy policy. Of the Android apps that call these APIs, 50% in our study have privacy non-compliance issues.
- RESONANT: Reinforcement Learning-based Moving Target Defense for Credit Card Fraud Detection. Abdel Messih, George; Cody, Tyler; Beling, Peter; Cho, Jin-Hee (ACM, 2024-11-11). According to security.org, as of 2023, 65% of credit card (CC) users in the US have been subjected to fraud at some point in their lives, which equates to about 151 million Americans. The proliferation of advanced machine learning (ML) algorithms has contributed to detecting credit card fraud (CCF). However, using a single or static ML-based defense model against a constantly evolving adversary cedes a structural advantage, enabling the adversary to reverse engineer the defense’s strategy over the rounds of an iterated game. This paper proposes an adaptive moving target defense (MTD) approach based on deep reinforcement learning (DRL), termed RESONANT, to identify the optimal points at which to switch to another ML classifier for credit card fraud detection. It identifies optimal moments to strategically switch between different ML-based defense models (i.e., classifiers) to invalidate any adversarial progress and always stay a step ahead of the adversary. We take this approach in an iterated game-theoretic manner where the adversary and defender take actions in turns in the CCF detection context. Via extensive simulation experiments, we investigate the performance of RESONANT against that of existing state-of-the-art counterparts in terms of the mean and variance of detection accuracy and attack success ratio. Our results demonstrate the superiority of RESONANT over these counterparts, including static and naïve ML and an MTD that selects a defense model at random (i.e., Random-MTD), outperforming them by up to two times in detection accuracy, measured by AUC (the Area Under the Receiver Operating Characteristic (ROC) Curve), and in system security against attacks, measured by attack success ratio (ASR).
- Machine Learning-Driven Optimization of Livestock Management: Classification of Cattle Behaviors for Enhanced Monitoring Efficiency. Zhao, Zhuqing; Shehada, Halah; Ha, Dong; Dos Reis, Barbara; White, Robin; Shin, Sook (ACM, 2024-08-02). Monitoring cattle health in remote and expansive pastures poses significant challenges that necessitate automated, continuous, and real-time behavior monitoring. This paper investigates the effectiveness and reliability of sensor-based cattle behavior classification for such monitoring, emphasizing the impact of intelligent feature selection in enhancing classification performance. To achieve this, we developed Wireless Sensor Nodes (WSN) affixed to individual cattle, enabling the capture of 3-axis acceleration data from five cows across varying seasons, spanning from summer to winter. Initially, we extracted a comprehensive set of 52 features, representing a broad spectrum of cow behaviors alongside statistical attributes. To enhance computational efficiency, we employed the Recursive Feature Elimination (RFE) method to distill 30 critical features by discarding redundant or less significant ones. Subsequently, these optimized features were utilized to train four machine learning (ML) models: Support Vector Machine (SVM), k-Nearest Neighbors (k-NN), Random Forest (RF), and Histogram-based Gradient Boosted Decision Trees (HGBDT). Notably, the HGBDT model demonstrated superior performance, achieving remarkable F1-scores of 99.01% for ‘grazing’, 98.74% for ‘ruminating’, 89.62% for ‘lying’, 84.06% for ‘standing’, and 91.87% for ‘walking’. These findings underscore the potential of our approach to serve as a robust framework for precision livestock farming, offering valuable insights into enhancing cattle health monitoring in remote environments.
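The Recursive Feature Elimination step described above can be sketched generically: rank features, drop the weakest, repeat until the target count remains. The `importance` callable below is a hypothetical stand-in for the model-derived feature scores the paper uses.

```python
# Minimal sketch of Recursive Feature Elimination (RFE).
def rfe(features, importance, keep):
    """Repeatedly drop the least important feature until `keep` remain.

    features:   iterable of feature names
    importance: callable mapping a feature name to a score
    keep:       number of features to retain
    """
    selected = list(features)
    while len(selected) > keep:
        # Eliminate the currently weakest feature, then re-evaluate.
        worst = min(selected, key=importance)
        selected.remove(worst)
    return selected
```

In a full RFE implementation the model is refit after each elimination so the importance scores reflect the surviving feature set; this sketch keeps the scores fixed for brevity.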
- SegIt: Empowering Sensor Data Labeling with Enhanced Efficiency and Security. Zhang, Zhen; Abraham, Samuel; Lee, Alex; Li, Yichen; Morota, Gota; Ha, Dong; Shin, Sook (ACM, 2024-08-02). SegIt is a novel, user-friendly, and highly efficient sensor data labeling tool designed to tackle critical challenges such as data privacy, synchronization accuracy, and memory efficiency inherent in existing labeling tools. While many current sensor data labeling tools provide free online services, they typically necessitate users to upload unlabeled sensor data, alongside video or audio references, to cloud storage for labeling. Nevertheless, such third-party storage exposes user data to potential security risks. SegIt, an innovative open-source tool, provides a software solution for tagging unlabeled sensor data directly on a local computer, ensuring enhanced accuracy, convenience, and, most importantly, data security.
- DeePSP-GIN: identification and classification of phage structural proteins using predicted protein structure, pretrained protein language model, and graph isomorphism network. Emon, Muhit Islam; Das, Badhan; Thukkaraju, Ashrith; Zhang, Liqing (ACM, 2024-11-22). Phages are vital components of the microbial ecosystem, and their functions and roles are largely determined by their structural proteins. Accurately annotating phage structural proteins (PSPs) is essential for understanding phage biology and their interactions with bacterial hosts, which can pave the way for innovative strategies to combat bacterial infections and develop phage-based therapies. However, the sequence diversity of PSPs makes their identification and annotation challenging. While various computational methods are available for predicting PSPs, they currently lack the integration of protein structural information, an important aspect for understanding protein function. With the advent of deep learning models, protein structures can be predicted accurately and quickly from protein sequences, creating new opportunities for PSP prediction and analysis. We developed DeePSP-GIN, a graph isomorphism network (GIN)-based deep learning model leveraging predicted protein structures and a protein language model for PSP identification and classification. To the best of our knowledge, DeePSP-GIN is the first method utilizing predicted protein structural information for PSP prediction tasks. It offers dual functionality: identifying PSP and non-PSP sequences, and classifying PSPs into seven major classes. DeePSP-GIN converts predicted protein structures from PDB 3D coordinates into graphs and extracts node features from protein language model-generated embeddings. The GIN is then applied to the constructed graphs to learn the discriminating features. The experimental results show that DeePSP-GIN outperforms the state-of-the-art methods in both PSP identification and classification tasks in terms of F1-score. DeePSP-GIN achieves a 1.04% higher F1-score than the nearest competing method in the PSP identification task. Additionally, its overall F1-score in the PSP classification task is approximately 34.38% higher than that of the second-best method. The source code of DeePSP-GIN is available at https://github.com/muhit-emon/DeePSP-GIN under the MIT license.
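The structure-to-graph step the abstract mentions (converting predicted 3D coordinates into a graph for the GIN) can be sketched as a contact graph: connect residues whose coordinates lie within a distance cutoff. The per-residue coordinate input and the 8 Å cutoff below are assumptions for illustration, not the paper's exact settings.

```python
# Build a residue contact graph from 3D coordinates.
import math

def contact_graph(coords, cutoff=8.0):
    """coords: list of (x, y, z) tuples, one per residue.
    Returns the edge set {(i, j), i < j} connecting residue pairs
    whose Euclidean distance is below the cutoff."""
    edges = set()
    for i in range(len(coords)):
        for j in range(i + 1, len(coords)):
            if math.dist(coords[i], coords[j]) < cutoff:
                edges.add((i, j))
    return edges
```

A graph neural network such as a GIN then operates on this edge set, with node features (here, language-model embeddings) attached to each residue.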
- FHIRViz: Multi-Agent Platform for FHIR Visualization to Advance Healthcare Analytics. ALMutairi, Mariam; AlKulaib, Lulwah; Wang, Shengkun; Chen, Zhiqian; Almutairi, Youssif; Alenazi, Thamer; Luther, Kurt; Lu, Chang-Tien (ACM, 2024-11-22). The shift to electronic health records (EHRs) has enhanced patient care and research, but data sharing and complex clinical terminology remain challenges. The Fast Healthcare Interoperability Resource (FHIR) addresses interoperability issues, though extracting insights from FHIR data is still difficult. Traditional analytics often miss critical clinical context, and managing FHIR data requires advanced skills that are in short supply. This study presents FHIRViz, a novel analytics tool that integrates FHIR data with a semantic layer via a knowledge graph. It employs a large language model (LLM) system to extract insights and visualize them effectively. A retrieval vector store improves performance by saving successful generations for fine-tuning. FHIRViz translates clinical queries into actionable insights with high accuracy. Results show FHIRViz with GPT-4 achieving 92.62% accuracy, while Gemini 1.5 Pro reaches 89.34%, demonstrating the tool’s potential in overcoming healthcare data analytics challenges.
- An Empirical Evaluation of Method Signature Similarity in Java Codebases. Khan, Mohammad; Elhussiny, Mohamed; Tobin, William; Gulzar, Muhammad (ACM, 2024-09-11). Modern programming languages have transformed software development by providing capabilities that enhance productivity and reduce code redundancy. One such feature is allowing developers to choose meaningful method names for implementation and functionality. As programs evolve into APIs and libraries, developers often design methods with similar signatures to streamline code management and improve comprehensibility. In this paper, we conduct a comprehensive study to evaluate the prevalence, usage, and perception of methods with similar signatures, including both conventionally overloaded and textually similar methods. Through analyzing 6.4 million lines of code across 167 well-established Java repositories on GitHub, we statistically assess the occurrence of these methods and their impact on usability and software quality. Additionally, we explore the evolution of these methods through a longitudinal analysis of historical commit snapshots. Our research reveals that both overloaded and textually similar methods are common in leading Java repositories and are primarily driven by specific software design requirements, program logic, and developers’ programming habits. As software matures, development shifts towards maintenance tasks that rarely necessitate design changes. Our longitudinal analysis corroborates this by indicating minimal changes in methods with similar signatures in the later stages of a repository’s life.
- Enforcing C/C++ Type and Scope at Runtime for Control-Flow and Data-Flow Integrity. Ismail, Mohannad; Jelesnianski, Christopher; Jang, Yeongjin; Min, Changwoo; Xiong, Wenjie (ACM, 2024-04-27). Control-flow hijacking and data-oriented attacks are becoming more sophisticated. These attacks, especially data-oriented attacks, can result in critical security threats, such as leaking an SSL key. Data-oriented attacks are hard to defend against with acceptable performance due to the sheer number of data pointers present. The root cause of such attacks is using pointers in unintended ways; fundamentally, these attacks rely on abusing pointers to violate the original scope they were used in or the original types that they were declared as. This paper proposes Scope Type Integrity (STI), a new defense policy that requires all pointers (both code and data pointers) to conform to the original programmer’s intent, as well as Runtime Scope Type Integrity (RSTI), mechanisms to enforce STI at runtime leveraging ARM Pointer Authentication. STI gathers information about the scope, type, and permissions of pointers. This information is then leveraged by RSTI to ensure pointers are legitimately utilized at runtime. We implemented three defense mechanisms of RSTI, with varying levels of security and performance tradeoffs to showcase the versatility of RSTI. We employ these three variants on a variety of benchmarks and real-world applications for a full security and performance evaluation of these mechanisms. Our results show that they have overheads of 5.29%, 2.97%, and 11.12%, respectively.
- Red is Sus: Automated Identification of Low-Quality Service Availability Claims in the US National Broadband Map. Nabi, Syed Tauhidun; Wen, Zhuowei; Ritter, Brooke; Hasan, Shaddi (ACM, 2024-11-04). The FCC’s National Broadband Map aspires to provide an unprecedented view into broadband availability in the US. However, this map, which also determines eligibility for public grant funding, relies on self-reported data from service providers that in turn have incentives to strategically misrepresent their coverage. In this paper, we develop an approach for automatically identifying these low-quality service claims in the National Broadband Map. To do this, we develop a novel dataset of broadband availability consisting of 750k observations from more than 900 US ISPs, derived from a combination of regulatory data and crowdsourced speed tests. Using this dataset, we develop a model to classify the accuracy of service provider regulatory filings and achieve AUCs over 0.98 for unseen examples. Our approach provides an effective technique to enable policymakers, civil society, and the public to identify portions of the National Broadband Map that are likely to have integrity challenges.
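The AUC metric reported above has a direct probabilistic reading: it is the probability that a randomly chosen positive example receives a higher classifier score than a randomly chosen negative one. This minimal sketch (illustrative, not the paper's evaluation code) computes AUC straight from that definition.

```python
# AUC from its rank-statistic definition: fraction of (positive, negative)
# pairs where the positive is scored higher; ties count as half a win.
def auc(scores, labels):
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

An AUC of 0.98 therefore means that for 98% of positive/negative pairs, the model ranks the positive (here, an inaccurate filing) above the negative.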
- Technology Use in the Black Church: Perspectives of Black Church Leaders: Preliminary Findings. Thompson, Gabriella; Otoo, Nissi; Fisher, Jaden; Sibi, Irene; Smith, Angela; Ogbonnaya-Ogburu, Ihudiya (ACM, 2024-11-11). Historically, the Black church has played a pivotal role in civic engagement and social justice, and continues to do so today. Yet, few researchers have explored how decisions around technology use are made in the church. To address this gap, we conducted semi-structured interviews with five Black church leaders to understand how church leaders interact with digital technologies, both in general and specifically with the communities that they serve. We found that while Black Church leaders are eager to engage with technology, most of the engagement with outside communities is through in-person contact; opportunities to give online carry a financial penalty in comparison to traditional methods of tithing and donating; and, lastly, technology use within outreach and ministries is highly dependent on ministry leaders, many of whom volunteer their time. We contribute to research that focuses on technology use in religious organizations and community engagement of community-based organizations.