Enhancing Software Maintenance: A Research Investigation on Current Practices, Potential Improvements, and Procedure Automation

Date

2025-12-18

Publisher

Virginia Tech

Abstract

Software maintenance is one of the most important phases of the Software Development Life Cycle, as it prevents unexpected development issues and ensures long-term reliability. Proper maintenance reduces cost and protects software from security and run-time problems. However, software maintenance is widely recognized as a challenging and resource-demanding phase of the life cycle. Developers frequently encounter difficulties such as managing dependency updates, addressing metadata inconsistencies, and adapting to evolving project requirements. These recurring issues motivate the need for better insight into maintenance practices and the exploration of automated techniques that can improve reliability and efficiency. To address these challenges, this dissertation presents (1) an examination of developers' current maintenance practices, (2) an investigation of the feasibility of using Large Language Models (LLMs) for software maintenance, and (3) the development of a new tool that automatically detects bugs and supports automated maintenance. First, we identified security-related best practices for JavaScript developers and examined how well they are followed in open-source projects. Our empirical study of 841 applications revealed frequent violations and showed that developers often ignore best practices due to perceived irrelevance or distrust in tools. These findings highlight the limitations of human-driven maintenance and motivate the exploration of automated assistance. Next, we evaluated how LLM tools such as ChatGPT perform on maintenance tasks compared to human developers. By analyzing their performance in technical Q&A and software revision, we found that ChatGPT provided better answers for 97 of 130 Stack Overflow questions and successfully revised software for 22 of 48 maintenance requests. While promising, our results indicate that LLMs struggle with context-specific precision.
Finally, we developed a domain-specific language (DSL) and an accompanying language engine, MECHECK, to detect metadata-related bugs in Java programs. We defined 15 rules from the Spring and JUnit documentation and evaluated MECHECK using two datasets of 115 enterprise applications. MECHECK detected bugs with 100% precision, 96% recall, and a 98% F-score, and identified 152 real-world bugs, 49 of which were confirmed fixes. These results demonstrate that MECHECK helps ensure the correct use of metadata and advances automated software maintenance. In summary, this research provides insight into software maintenance and its challenges, and shows how the process can be improved: from understanding developer behavior, to leveraging AI assistance, to creating automated detection tools.
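To make the notion of a metadata-related bug concrete, the sketch below shows the kind of rule a checker like MECHECK might encode. The abstract does not show MECHECK's actual DSL or rule syntax, so this is a hypothetical, self-contained illustration: it defines a local stand-in for JUnit 4's @Test annotation and flags the documented JUnit 4 requirement that test methods be public (a non-public @Test method is rejected by the runner and never executes). All class and method names here are invented for illustration.

```java
import java.lang.annotation.*;
import java.lang.reflect.*;
import java.util.*;

public class MetaCheckSketch {
    // Local stand-in for JUnit 4's @Test, so this sketch needs no
    // external dependency (hypothetical; not the real annotation).
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.METHOD)
    @interface Test {}

    static class SampleSuite {
        @Test public void addsCorrectly() {}

        // Metadata bug: JUnit 4 requires @Test methods to be public,
        // so this test would silently fail to run.
        @Test private void neverRuns() {}
    }

    // One rule in the spirit of MECHECK: every @Test method must be
    // public. Returns the names of violating methods.
    static List<String> violations(Class<?> clazz) {
        List<String> out = new ArrayList<>();
        for (Method m : clazz.getDeclaredMethods()) {
            if (m.isAnnotationPresent(Test.class)
                    && !Modifier.isPublic(m.getModifiers())) {
                out.add(m.getName());
            }
        }
        return out;
    }

    public static void main(String[] args) {
        // Flags only the non-public test method.
        System.out.println(violations(SampleSuite.class)); // prints [neverRuns]
    }
}
```

Such bugs are easy to miss in review precisely because the code compiles cleanly; only the framework's documented conventions make the annotation usage incorrect, which is why rule-based detection over metadata is useful.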

Keywords

Software Engineering, Software Maintenance
