Deep Learning Methods for Built Environment Operational Management

Date

2025-09-19

Publisher

Virginia Tech

Abstract

This dissertation investigated the development of efficient, reliable, and scalable time series (TS) deep learning (DL) frameworks for enhancing operational management in the built environment, with case studies on (i) reliable infrastructure anomaly detection (AD) and (ii) scalable energy forecasting. An unsupervised, univariate, probabilistic anomaly detection framework, DEGAN (Density Estimation-based Generative Adversarial Network), was studied to enhance detection accuracy, with an emphasis on balancing the recall-precision trade-off, using a real-world case study of railroad track monitoring. By leveraging repeated inspection data, employing standalone discriminator models trained solely on normal time series samples, and applying kernel density estimation for probabilistic AD, DEGAN achieved a balanced F1 score of 0.83 (recall = 0.80, precision = 0.86) and outperformed classical unsupervised machine learning baselines. The findings demonstrated the potential of DL architectures to effectively encode domain-specific human knowledge in infrastructure monitoring tasks.

The second study extended the univariate DEGAN framework to effective and efficient multivariate time series AD. A flexible framework was introduced to support both one-dimensional (1D) and two-dimensional (2D) DL architectures, including autoencoders (AEs and VAEs) and GANs. Using this framework, 14 combinations of data embedding techniques (ensemble, reshaping, stacking, TS-to-image conversion) and model types (1D and 2D DL models) were evaluated. On multi-channel railroad track inspection data, a 2D convolutional AE with channel stacking and a 1D convolutional GAN with reshaping (flattening multi-channel sequences into vectors) were identified as the best-performing models; both achieved an F1 score of 0.86 and were more computationally efficient than classical ML models.
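The stacking and reshaping embeddings named above can be illustrated with array operations. This is a minimal sketch, assuming a hypothetical 4-channel inspection window of 128 time steps; the channel count and window length are illustrative, not values from the dissertation.

```python
import numpy as np

# Hypothetical multi-channel inspection window: 4 channels x 128 time steps.
window = np.random.randn(4, 128)

# "Stacking": keep channels as rows, treating the window as a 2D
# (channels x time) image-like input for a 2D convolutional model.
stacked = window[np.newaxis, ...]   # shape (1, 4, 128): batch of one 2D sample

# "Reshaping": flatten the multi-channel window into a single 1D sequence
# suitable for a 1D convolutional model.
reshaped = window.reshape(1, -1)    # shape (1, 512): batch of one 1D sample

print(stacked.shape, reshaped.shape)  # → (1, 4, 128) (1, 512)
```

The two embeddings carry the same values; they differ only in whether cross-channel structure is presented spatially (for 2D convolutions) or serialized along one axis (for 1D convolutions).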
Expanding the scope beyond context-specific models, the third study addressed the scalability and generalizability of DL models. Given the need for large and heterogeneous datasets, scalable DL models were studied in the context of energy forecasting tasks through the lens of foundation models (FMs), i.e., large models trained on such datasets. A comprehensive literature synthesis was first conducted on Time Series Foundation Models (TSFMs), which represent promising alternatives to specialist energy forecasting models. The synthesis covered general-purpose TSFMs, including native TSFMs (trained exclusively on TS data) and large language model (LLM)-adapted variants. Using data from more than 1,000 buildings, a comprehensive comparative study was then conducted and showed that GEM (a dedicated FM trained solely on the large energy dataset) and a representative TSFM fine-tuned on the large energy dataset (TimesFM2.0-E) consistently outperformed baseline DL models trained on individual buildings, with zero-shot mean absolute error (MAE) improvements ranging from 16.3% (24 h horizon) to 7.3% (168 h horizon). Building-level fine-tuning of these two FMs further increased gains to 17.8%-8.5%, with adaptation times reduced to 11-35 seconds, compared to 301-963 seconds for the baselines. Although general-purpose TSFMs exhibited weaker zero-shot performance, all of their building-level fine-tuned variants outperformed the baselines. These findings demonstrate the effectiveness of TSFMs, particularly energy-pretrained or domain-adapted models, as scalable and high-performing solutions for building energy forecasting. Together, these studies offer insights into achieving reliable and scalable deep learning in infrastructure operational management, advancing the use of generative artificial intelligence and foundation models in real-world, data-driven built environment management.
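The reported MAE improvements are relative reductions against the per-building baselines. A minimal sketch of that arithmetic, using illustrative MAE values (not figures from the dissertation) chosen to reproduce the 16.3% zero-shot gain at the 24 h horizon:

```python
def mae_improvement(mae_baseline: float, mae_model: float) -> float:
    """Percent MAE reduction relative to a baseline model (higher is better)."""
    return 100.0 * (mae_baseline - mae_model) / mae_baseline

# Illustrative only: a foundation model lowering a building's forecast MAE
# from 2.0 kWh to 1.674 kWh corresponds to a 16.3% improvement.
print(round(mae_improvement(2.0, 1.674), 1))  # → 16.3
```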

Keywords

Deep Learning, Infrastructure Operational Management, Energy Predictive Management, Time Series, Foundation Model