Browsing by Author "Jin, Ran"
Now showing 1 - 20 of 40
- Advancing Manufacturing Quality Control Capabilities Through The Use Of In-Line High-Density Dimensional Data
  Wells, Lee Jay (Virginia Tech, 2014-01-15)
  Through recent advancements in high-density dimensional (HDD) measurement technologies, such as 3D laser scanners, data sets consisting of an almost complete representation of a manufactured part's geometry can now be obtained. While HDD measurement devices have traditionally been used in reverse engineering applications, they are beginning to be applied as in-line measurement devices. Unfortunately, appropriate quality control (QC) techniques have yet to be developed to take full advantage of this new data-rich environment, which for the most part still relies on extracting discrete key product characteristics (KPCs) for analysis. Maximizing the potential of HDD measurement technologies requires a new quality paradigm. Specifically, when presented with HDD data, quality should not only be assessed by discrete KPCs but should consider the entire part being produced; anything less results in valuable data being wasted. This dissertation addresses the need for adapting current techniques and developing new approaches for the use of HDD data in manufacturing systems to increase overall QC capabilities. Specifically, this research effort focuses on the use of HDD data for (1) developing a framework for self-correcting compliant assembly systems, (2) using statistical process control to detect process shifts through part surfaces, and (3) performing automated part inspection for non-feature-based faults. The overarching goal of this research is to identify how HDD data can be used within these three research focus areas to increase QC capabilities while following the principles of the aforementioned new quality paradigm.
- Balancing of Parallel U-Shaped Assembly Lines with Crossover Points
  Rattan, Amanpreet (Virginia Tech, 2017-09-06)
  This research introduces parallel U-shaped assembly lines with crossover points. Crossover points are connecting points between two parallel U-shaped lines, making the lines interdependent. The assembly lines can be employed to manufacture a variety of products belonging to the same product family. This is achieved by utilizing the concepts of crossover points, multi-line stations, and regular stations. The binary programming formulation presented in this research can be employed for any scenario (e.g., task times, cycle times, and the number of tasks) in a configuration that includes a crossover point. The comparison of numerical problem solutions based on the proposed heuristic approach with the traditional approach highlights the possible reduction in the quantity of workers required. The conclusion from this research is that a wider variety of products can be manufactured at the same capital expense using parallel U-shaped assembly lines with crossover points, leading to a reduction in the total number of workers.
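As a rough illustration of this style of binary programming formulation (a sketch only, not the thesis model: the task times, cycle time, station count, and single precedence pair below are invented, and the crossover-point and multi-line station constraints are omitted), a minimal line-balancing program in PuLP might look like:

```python
# Minimal assembly-line-balancing sketch in the spirit of a binary program:
# assign each task to one station, respect the cycle time, minimize open
# stations (a proxy for workers). All data here are hypothetical.
from pulp import LpProblem, LpVariable, LpMinimize, lpSum, LpBinary

times = {"A": 4, "B": 3, "C": 5, "D": 6}   # task -> processing time
stations = range(3)                        # candidate stations
cycle_time = 8

prob = LpProblem("line_balancing_sketch", LpMinimize)
x = {(t, s): LpVariable(f"x_{t}_{s}", cat=LpBinary) for t in times for s in stations}
y = {s: LpVariable(f"y_{s}", cat=LpBinary) for s in stations}

prob += lpSum(y[s] for s in stations)                 # minimize open stations
for t in times:                                       # each task assigned once
    prob += lpSum(x[t, s] for s in stations) == 1
for s in stations:                                    # workload within cycle time
    prob += lpSum(times[t] * x[t, s] for t in times) <= cycle_time * y[s]
# one illustrative precedence relation: A must not sit downstream of B
prob += lpSum(s * x["A", s] for s in stations) <= lpSum(s * x["B", s] for s in stations)

prob.solve()
print({t: s for (t, s) in x if x[t, s].value() == 1})
```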
- Compressive Sensing Approaches for Sensor-based Predictive Analytics in Manufacturing and Service Systems
  Bastani, Kaveh (Virginia Tech, 2016-03-14)
  Recent advancements in sensing technologies offer new opportunities for quality improvement and assurance in manufacturing and service systems. The sensor advances provide a vast amount of data, accommodating quality improvement decisions such as fault diagnosis (root cause analysis) and real-time process monitoring. These quality improvement decisions are typically made based on the predictive analysis of the sensor data, so-called sensor-based predictive analytics. Sensor-based predictive analytics encompasses a variety of statistical, machine learning, and data mining techniques to identify patterns between the sensor data and historical facts. Given these patterns, predictions are made about the quality state of the process, and corrective actions are taken accordingly. Although the recent advances in sensing technologies have facilitated quality improvement decisions, they typically result in high-dimensional sensor data, making the use of sensor-based predictive analytics challenging due to its inherently intensive computation. This research begins in Chapter 1 by raising an interesting question: whether all these sensor data are required for making effective quality improvement decisions, and if not, whether there is any way to systematically reduce the number of sensors without affecting the performance of the predictive analytics. Chapter 2 attempts to address this question by reviewing the related research in the area of signal processing, namely compressive sensing (CS), a novel sampling paradigm as opposed to the traditional sampling strategy following the Shannon-Nyquist rate. By CS theory, a signal can be reconstructed from a reduced number of samples; this motivates developing CS-based approaches to facilitate predictive analytics using a reduced number of sensors. The proposed research methodology in this dissertation encompasses CS approaches developed to deliver the following two major contributions: (1) CS sensing to reduce the number of sensors while capturing the most relevant information, and (2) CS predictive analytics to conduct predictive analysis on the reduced number of sensor data. The proposed methodology has a generic framework which can be utilized for numerous real-world applications. However, for the sake of brevity, the validity of the proposed methodology has been verified with real sensor data associated with multi-station assembly processes (Chapters 3 and 4), additive manufacturing (Chapter 5), and wearable sensing systems (Chapter 6). Chapter 7 summarizes the contribution of the research and expresses the potential future research directions with applications to big data analytics.
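For intuition on the CS principle invoked above, the toy sketch below recovers a sparse signal from far fewer random measurements than its length, using orthogonal matching pursuit as a stand-in reconstruction algorithm (the dissertation's CS approaches are more involved; all data here are synthetic):

```python
# Toy compressive sensing: recover a k-sparse signal of length n from
# m << n random linear measurements via orthogonal matching pursuit.
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)
n, m, k = 256, 64, 5                    # signal length, measurements, sparsity
x = np.zeros(n)
x[rng.choice(n, size=k, replace=False)] = rng.normal(size=k)   # sparse signal

Phi = rng.normal(size=(m, n)) / np.sqrt(m)   # random sensing matrix
y = Phi @ x                                  # compressed measurements

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k, fit_intercept=False).fit(Phi, y)
x_hat = omp.coef_
print("relative reconstruction error:", np.linalg.norm(x - x_hat) / np.linalg.norm(x))
```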
- Context Dependent Gaze Metrics for Evaluation of Laparoscopic Surgery Manual Skills
  Kulkarni, Chaitanya Shashikant (Virginia Tech, 2021-06-10)
  With the growing adoption of laparoscopic surgery practices, high-quality training and qualification of laparoscopic skills through objective assessment has become critical. While eye-gaze and instrument motion analyses have demonstrated promise in producing objective metrics for skill assessment in laparoscopic surgery, three areas deserve further research attention. First, most eye-gaze metrics do not account for trainee behaviors that change the visual scene or context, which can be addressed by computer vision. Second, feedforward control metrics leveraging the relationship between eye-gaze and hand movements have not been investigated in laparoscopic surgery. Finally, eye-gaze metrics have not demonstrated sensitivity to the skill progressions of trainees, as the literature has focused on differences between experts and novices, although feedback on skill acquisition is most useful for trainees and educators. To advance eye-gaze assessment in laparoscopic surgery, this research presents a three-stage gaze-based assessment methodology to provide a standardized process for generating context-dependent gaze metrics and estimating the proficiency levels of medical trainees in surgery. The three stages are: (1) contextual scene analysis for segmenting surgical scenes into areas of interest, (2) computing context-dependent gaze metrics based on eye fixation on areas of interest, and (3) defining and estimating skill proficiency levels with unsupervised and supervised learning, respectively. This methodology was applied to analyze 499 practice trials by nine medical trainees practicing the peg transfer task in the Fundamentals of Laparoscopic Surgery program. The application of this methodology generated five context-dependent gaze metrics and one tool movement metric, defined three proficiency levels of the trainees, and developed a model predicting the proficiency level of a participant for a given trial with 99% accuracy. Further, two of the six metrics are completely novel, capturing feedforward behaviors in the surgical domain. The results also demonstrated that gaze metrics can reveal skill levels more precisely than the expert/novice distinction emphasized in the literature. Thus, the metrics derived from the gaze-based assessment methodology show high sensitivity to trainee skill levels. The implications of this research include providing automated feedback to trainees on where they looked during a practice trial and what skill proficiency level was attained after each practice trial.
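For a concrete flavor of stage (2), one simple context-dependent gaze metric is the dwell ratio on a segmented area of interest; the sketch below uses invented gaze coordinates and an invented AOI box, not the study's data or metric definitions:

```python
# Sketch of one context-dependent gaze metric: the fraction of gaze samples
# falling inside an area of interest (AOI) produced by scene segmentation.
# Coordinates and the AOI bounding box are invented for illustration.
import numpy as np

gaze = np.array([[0.42, 0.55], [0.40, 0.57], [0.90, 0.10], [0.45, 0.52]])  # normalized (x, y)
aoi = (0.35, 0.45, 0.50, 0.60)   # (x_min, y_min, x_max, y_max)

inside = ((gaze[:, 0] >= aoi[0]) & (gaze[:, 0] <= aoi[2]) &
          (gaze[:, 1] >= aoi[1]) & (gaze[:, 1] <= aoi[3]))
print(f"dwell ratio on AOI: {inside.mean():.2f}")
```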
- Contributions to Structured Variable Selection Towards Enhancing Model Interpretation and Computation Efficiency
  Shen, Sumin (Virginia Tech, 2020-02-07)
  The advances in data-collecting technologies provide great opportunities to access large sample-size data sets with high dimensionality. Variable selection is an important procedure to extract useful knowledge from such complex data. In many real-data applications, appropriate selection of variables should facilitate model interpretation and computation efficiency. It is thus important to incorporate domain knowledge of the underlying data generation mechanism to select key variables for improving model performance. However, general variable selection techniques, such as best subset selection and the Lasso, often do not take the underlying data generation mechanism into consideration. This thesis aims to develop statistical modeling methodologies with a focus on structured variable selection towards better model interpretation and computation efficiency. Specifically, it consists of three parts: an additive heredity model with coefficients incorporating multi-level data, a regularized dynamic generalized linear model with piecewise constant functional coefficients, and a structured variable selection method within the best subset selection framework. In Chapter 2, an additive heredity model is proposed for analyzing mixture-of-mixtures (MoM) experiments. The MoM experiment is different from the classical mixture experiment in that the mixture component in MoM experiments, known as the major component, is made up of sub-components, known as the minor components. The proposed model considers an additive structure to inherently connect the major components with the minor components. To enable a meaningful interpretation for the estimated model, we apply the hierarchical and heredity principles by using the nonnegative garrote technique for model selection. The performance of the additive heredity model was compared to several conventional methods in both unconstrained and constrained MoM experiments. The additive heredity model was then successfully applied in a real problem of optimizing the Pringles® potato crisp studied previously in the literature. In Chapter 3, we consider the dynamic effects of variables in generalized linear models such as logistic regression. This work is motivated by the engineering problem of varying effects of process variables on product quality caused by equipment degradation. To address this challenge, we propose a penalized dynamic regression model that is flexible enough to estimate the dynamic coefficient structure. The proposed method models the functional coefficient parameters as piecewise constant functions. Specifically, under the penalized regression framework, the fused lasso penalty is adopted for detecting changes in the dynamic coefficients, and the group lasso penalty is applied to enable a sparse selection of variables. Moreover, an efficient parameter estimation algorithm is developed based on the alternating direction method of multipliers. The performance of the dynamic coefficient model is evaluated in numerical studies and three real-data examples. In Chapter 4, we develop a structured variable selection method within the best subset selection framework. In the literature, many techniques within the Lasso framework have been developed to address structured variable selection issues, but less attention has been paid to structured best subset selection problems. In this work, we propose a sparse ridge regression method to address structured variable selection issues. The key idea of the proposed method is to re-construct the regression matrix from the angle of experimental designs. We employ an estimation-maximization algorithm to formulate the best subset selection problem as an iterative linear integer optimization (LIO) problem, with a mixed integer optimization algorithm as the selection step. We demonstrate the power of the proposed method in various structured variable selection problems. Moreover, the proposed method can be extended to ridge-penalized best subset selection problems. The performance of the proposed method is evaluated in numerical studies.
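Schematically, the Chapter 3 formulation can be read as a penalized GLM objective of the following form (notation assumed here for illustration; it is not copied from the thesis):

```latex
\min_{\beta_1, \dots, \beta_T} \;
\sum_{t=1}^{T} \ell\!\left(y_t, \mathbf{x}_t^{\top} \beta_t\right)
\; + \; \lambda_1 \sum_{t=1}^{T-1} \left\lVert \beta_{t+1} - \beta_t \right\rVert_1
\; + \; \lambda_2 \sum_{j=1}^{p} \left\lVert \left(\beta_{1j}, \dots, \beta_{Tj}\right) \right\rVert_2
```

Here ℓ is the GLM negative log-likelihood (e.g., the logistic loss), the fused lasso term detects change points in the piecewise constant coefficient paths, and the group lasso term zeroes out variables whose entire coefficient path is irrelevant.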
- Data Exchange for Artificial Intelligence Incubation in Manufacturing Industrial Internet
  Zeng, Yingyan (Virginia Tech, 2024-08-21)
  Industrial Cyber-physical Systems (ICPSs) connect industrial equipment and manufacturing processes via ubiquitous sensors, actuators, and computing units, forming the Manufacturing Industrial Internet (MII). With the data generated from MII, Artificial Intelligence (AI) greatly advances data-driven decision making for manufacturing efficiency, quality improvement, and cost reduction. However, data with poor quality have posed significant challenges to the incubation (i.e., training, validation, and deployment) of AI models. In the offline training phase, training data with poor quality result in inaccurate AI models. In the online training and deployment phases, high-volume but information-poor data lead to a discrepancy in AI modeling performance across phases, a high communication and computation workload, and a high cost of data acquisition and storage. In the incubation of AI models for multiple manufacturing stages or systems, exchanging and sharing datasets can significantly improve the efficiency of data collection for a single manufacturing enterprise and improve the quality of training datasets. However, inaccurate estimation of the value of datasets can cause ineffective dataset exchange and hamper the scaling up of AI systems. High-quality and high-value data not only enhance modeling performance during AI incubation, but also contribute to effective data exchange for potential synergistic intelligence in MII. Therefore, it is important to assess and ensure data quality in terms of its value for AI models. In this dissertation, our ultimate goal is to establish a data exchange paradigm to provide high-quality and high-value data for AI incubation in MII. To achieve this goal, three research tasks are proposed for different phases of AI incubation: (1) a prediction-oriented data generation method to actively generate highly informative data in the offline training phase for high prediction performance (Chapter 2); (2) an ensemble active learning by contextual bandits framework for the acquisition and evaluation of passively collected online data, enabling continuous improvement and resilient modeling performance during the online training and deployment phases (Chapter 3); and (3) a context-aware, performance-oriented, and privacy-preserving dataset-sharing framework to efficiently share and exchange small-but-high-quality datasets between trusted stakeholders, allowing their on-demand usage (Chapter 4). All the proposed methodologies have been evaluated and validated through simulation studies and applications to real manufacturing case studies. Chapter 5 summarizes the contribution of this work and proposes future research directions.
- Data Filtering and Modeling for Smart Manufacturing Network
  Li, Yifu (Virginia Tech, 2020-08-13)
  A smart manufacturing network connects machines via sensing, communication, and actuation networks. The data generated from the networks are used in data-driven modeling and decision-making to improve quality, productivity, and flexibility while reducing cost. This dissertation focuses on improving the data-driven modeling of the quality-process relationship in smart manufacturing networks. Quality-process variable relationships are important to understand for guiding quality improvement by optimizing process variables. However, several challenges emerge. First, the big data sets generated from the manufacturing network may be information-poor for modeling, which may lead to high data transmission and computational loads and redundant data storage. Second, the data generated from connected machines often contain inexplicit similarities due to similar product designs and manufacturing processes. Modeling such inexplicit similarities remains challenging. Third, it is unclear how to select representative data sets for modeling in a manufacturing network setting, considering inexplicit similarities. In this dissertation, a data filtering method is proposed to select a relatively small and informative data subset. Multi-task learning is combined with latent variable decomposition to model multiple connected manufacturing processes that are similar but non-identical. A data filtering and modeling framework is also proposed to adaptively filter manufacturing data for manufacturing network modeling. The proposed methodologies have been validated through simulation and applications to real manufacturing case studies.
- Data Sharing and Retrieval of Manufacturing Processes
  Seth, Avi (Virginia Tech, 2023-03-28)
  With the Industrial Internet, businesses can pool their resources to acquire large amounts of data that can then be used in machine learning tasks. Despite the potential to speed up training and deployment and improve decision-making through data sharing, rising privacy concerns are slowing the spread of such technologies. As businesses are naturally protective of their data, this poses a barrier to interoperability. While previous research has focused on privacy-preserving methods, existing works typically consider data that are averaged or randomly sampled by all contributors rather than selecting the data best suited for a specific downstream learning task. In response to the dearth of efficient data-sharing methods for diverse machine learning tasks in the Industrial Internet, this work presents an end-to-end working demonstration of a search engine prototype built on PriED, a task-driven data-sharing approach that enhances the performance of supervised learning by judiciously fusing shared and local participant data.
- Design, Implementation and Use of In-Process Sensor Data for Monitoring Broaching and Turning Processes: A Multi-Sensor Approach
  Rathinam, Arvinth Chandar (Virginia Tech, 2013-06-02)
  Real-time quality monitoring continues to gain interest within the manufacturing domain as new and faster sensors are being developed. Unfortunately, most quality monitoring solutions are still based on collecting data from the end product. From a process improvement point of view, it is definitely more advantageous to proactively monitor quality directly in the process instead of the product, so that the consequences of a defective part can be minimized or even eliminated. In this dissertation, new methods for in-line process monitoring are explored using multiple sensors. In the first case, a new cutting force-based monitoring methodology was developed to detect out-of-control conditions in a broaching operation. The second part of this thesis focuses on the development of a test bed for monitoring the tool condition in a turning operation. The constructed test bed combines multiple sensor signals, including temperature, vibration, and energy measurements. Here, the proposed SPC strategy integrates sensor data with engineering knowledge to produce quick, reliable results using proven profile monitoring techniques, whereas existing methods based on raw process data require more features to monitor the process without loss of information. This technique is straightforward and able to monitor the process comprehensively with fewer features. Consequently, it also adds to the group of tools available to the practitioner.
- Distributed Data Filtering and Modeling for Fog and Networked Manufacturing
  Li, Yifu; Wang, Lening; Chen, Xiaoyu; Jin, Ran (2023-04-05)
- Efficient Prevalence Estimation for Emerging and Seasonal Diseases Under Limited Resources
  Nguyen, Ngoc Thu (Virginia Tech, 2019-05-30)
  Estimating the prevalence rate of a disease is crucial for controlling its spread, and for planning of healthcare services. Due to limited testing budgets and resources, prevalence estimation typically entails pooled, or group, testing, where specimens (e.g., blood, urine, tissue swabs) from a number of subjects are combined into a testing pool, which is then tested via a single test. Testing outcomes from multiple pools are analyzed so as to assess the prevalence of the disease. The accuracy of prevalence estimation relies on the testing pool design, i.e., the number of pools to test and the pool sizes (the number of specimens to combine in a pool). Determining an optimal pool design for prevalence estimation can be challenging, as it requires prior information on the current status of the disease, which can be highly unreliable, or simply unavailable, especially for emerging and/or seasonal diseases. We develop and study frameworks for prevalence estimation under highly unreliable prior information on the disease and limited testing budgets. Embedded into each estimation framework is an optimization model that determines the optimal testing pool design, considering the trade-off between testing cost and estimation accuracy. We establish important structural properties of optimal testing pool designs in various settings, and develop efficient and exact algorithms. Our numerous case studies, ranging from prevalence estimation of the human immunodeficiency virus (HIV) in various parts of Africa, to prevalence estimation of diseases in plants and insects, including the Tomato Spotted Wilt virus in thrips and West Nile virus in mosquitoes, indicate that the proposed estimation methods substantially outperform current approaches developed in the literature, and produce robust testing pool designs that can hedge against the uncertainty in model inputs. Our research findings indicate that the proposed prevalence estimation frameworks are capable of producing accurate prevalence estimates, and are highly desirable, especially for emerging and/or seasonal diseases under limited testing budgets.
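For intuition on how pooled outcomes map back to a prevalence estimate: with pools of size n, independent specimens, and a perfect test, a pool tests positive with probability 1 - (1 - p)^n, so the maximum likelihood estimate from T positives out of m pools simply inverts that relation. The sketch below assumes this idealized setting; the dissertation's optimal designs additionally handle unreliable priors, imperfect tests, and budget constraints.

```python
# Minimal pooled-testing prevalence MLE, assuming a perfect assay and equal
# pool sizes. With pool size n, P(pool positive) = 1 - (1 - p)^n, so the
# MLE inverts the observed positive fraction T/m.
def prevalence_mle(n_pools: int, n_positive: int, pool_size: int) -> float:
    positive_frac = n_positive / n_pools
    return 1.0 - (1.0 - positive_frac) ** (1.0 / pool_size)

# e.g., 40 pools of 10 specimens each, 12 pools test positive
print(f"{prevalence_mle(40, 12, 10):.4f}")   # ~0.0350
```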
- Ensemble Active Learning by Contextual Bandits for AI Incubation in Manufacturing
  Zeng, Yingyan; Chen, Xiaoyu; Jin, Ran (2023-02)
  The online sensing techniques and computational resources in an Industrial Cyber-physical System (ICPS) provide a digital foundation for data-driven decision making by artificial intelligence (AI) models. However, the poor data quality (e.g., inconsistent distribution, imbalanced classes) of high-speed, large-volume data streams poses significant challenges to the online deployment of offline-trained AI models. As an alternative, updating AI models online based on streaming data enables continuous improvement and resilient modeling performance. However, for a supervised learning model (i.e., a base learner), it is labor-intensive to continuously annotate all streaming samples, and it is also challenging to select a subset of good quality to update the model. Hence, a data acquisition method is needed to select the data for annotation from streaming data to ensure data quality while saving annotation effort. In the literature, active learning methods have been proposed to acquire informative samples. Different acquisition criteria were developed for exploration of under-represented regions in the input variable space or exploitation of the well-represented regions for optimal estimation of base learners. However, it remains a challenge to balance the exploration-exploitation trade-off under different online annotation scenarios. On the other hand, an acquisition criterion learned by AI (e.g., by reinforcement learning) adapts itself to a scenario dynamically, but the ambiguous consideration of the trade-off limits its performance in frequently changing manufacturing contexts. To overcome these limitations, we propose an ensemble active learning method by contextual bandits (CbeAL). CbeAL incorporates a set of active learning agents (i.e., acquisition criteria) explicitly designed for exploration or exploitation via a weighted combination of their acquisition decisions. The weight of each agent is dynamically adjusted based on the usefulness of its decisions in improving the performance of the base learner. With adaptive and explicit consideration of both objectives, CbeAL efficiently guides the data acquisition process by selecting informative samples to reduce the human annotation effort. Furthermore, we characterize the exploration and exploitation capability of the proposed agents theoretically. The evaluation results in a numerical simulation study and a real case study demonstrate the effectiveness and efficiency of CbeAL in manufacturing process modeling of the ICPS.
- Ensemble Active Learning by Contextual Bandits for AI Incubation in Manufacturing
  Zeng, Yingyan; Chen, Xiaoyu; Jin, Ran (ACM, 2023-10)
  An Industrial Cyber-physical System (ICPS) provides a digital foundation for data-driven decision-making by artificial intelligence (AI) models. However, the poor data quality (e.g., inconsistent distribution, imbalanced classes) of high-speed, large-volume data streams poses significant challenges to the online deployment of offline-trained AI models. As an alternative, updating AI models online based on streaming data enables continuous improvement and resilient modeling performance. However, for a supervised learning model (i.e., a base learner), it is labor-intensive to annotate all streaming samples to update the model. Hence, a data acquisition method is needed to select the data for annotation to ensure data quality while saving annotation effort. In the literature, active learning methods have been proposed to acquire informative samples. Different acquisition criteria were developed for exploration of under-represented regions in the input variable space or exploitation of the well-represented regions for optimal estimation of base learners. However, it remains a challenge to balance the exploration-exploitation trade-off under different online annotation scenarios. On the other hand, an acquisition criterion learned by AI adapts itself to a scenario dynamically, but the ambiguous consideration of the trade-off limits its performance in frequently changing manufacturing contexts. To overcome these limitations, we propose an ensemble active learning method by contextual bandits (CbeAL). CbeAL incorporates a set of active learning agents (i.e., acquisition criteria) explicitly designed for exploration or exploitation via a weighted combination of their acquisition decisions. The weight of each agent is dynamically adjusted based on the usefulness of its decisions in improving the performance of the base learner. With adaptive and explicit consideration of both objectives, CbeAL efficiently guides the data acquisition process by selecting informative samples to reduce the human annotation effort. Furthermore, we characterize the exploration and exploitation capability of the proposed agents theoretically. The evaluation results in a numerical simulation study and a real case study demonstrate the effectiveness and efficiency of CbeAL in manufacturing process modeling of the ICPS.
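A schematic of the weighted-ensemble idea described in both records above (not the authors' implementation): each agent votes on whether to acquire a streaming sample, the ensemble takes a weighted vote, and weights are updated multiplicatively by how useful each agent's decision proved for the base learner. The agent rules and the reward signal below are placeholders.

```python
# Schematic exponential-weights ensemble of acquisition agents, combining
# exploration- and exploitation-oriented criteria. The two agent rules and
# the reward signal are invented placeholders, not the paper's design.
import numpy as np

rng = np.random.default_rng(1)

agents = [
    lambda x: float(np.linalg.norm(x) > 1.2),   # "exploration": far from seen region
    lambda x: float(abs(x[0]) < 0.3),           # "exploitation": near decision boundary
]
w = np.ones(len(agents))
eta = 0.5                                       # learning rate for weight updates

def acquire(x):
    votes = np.array([agent(x) for agent in agents])
    return (w / w.sum()) @ votes > 0.5, votes   # weighted majority vote

for _ in range(100):                            # streaming samples
    x = rng.normal(size=2)
    take, votes = acquire(x)
    if take:
        # placeholder reward: improvement of the base learner after
        # annotating and adding x (random here, for illustration only)
        reward = rng.uniform()
        w *= np.exp(eta * reward * votes)       # credit agents that voted to acquire
        w /= w.max()                            # keep weights numerically bounded

print("final agent weights:", np.round(w / w.sum(), 3))
```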
- Ensemble Modelling of in situ Feature Variables for Printed Electronics Manufacturing with in situ Process Control Potential
  Mohan, Karuniya (Virginia Tech, 2017-03-10)
  Aerosol Jet® Printing (AJP) is a direct-write based additive manufacturing process that is capable of printing electronics with fine features and various materials. It eliminates the complex masking process in traditional semiconductor manufacturing, thus enabling flexible electronics design and reducing manufacturing cost. However, the quality control of AJP processes is still a challenging problem, primarily due to the lack of understanding of the potential root causes of quality issues. There is a complex interaction among process setting variables, in situ feature variables, and quality variables in AJP processes. In this research, an ensemble model strategy is proposed to quantify the effect of the process setting variables on the in situ feature variables, and the effect of the in situ feature variables on quality variables, in a two-level hierarchical way. By identifying significant in situ feature variables as responses for the process setting variables, as well as predictors for product quality, in a joint estimation problem, the proposed models have a hierarchical variable relationship that enables in situ process control for variation reduction and defect mitigation. A real case study is investigated to demonstrate the advantages of the proposed method.
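A bare-bones version of the two-level idea (synthetic data; the paper's ensemble strategy and variable selection are omitted): regress in situ features on process settings, then regress quality on the in situ features, so that a change in settings can be propagated through to predicted quality.

```python
# Bare-bones two-level model: process settings -> in situ features ->
# quality, each stage an ordinary linear regression on synthetic data.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(4)
settings = rng.normal(size=(200, 3))                     # process setting variables
features = settings @ rng.normal(size=(3, 4)) + 0.1 * rng.normal(size=(200, 4))
quality = features @ rng.normal(size=(4,)) + 0.1 * rng.normal(size=200)

stage1 = LinearRegression().fit(settings, features)      # settings -> in situ features
stage2 = LinearRegression().fit(features, quality)       # features -> quality

new_settings = rng.normal(size=(1, 3))
print(stage2.predict(stage1.predict(new_settings)))      # propagated quality prediction
```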
- Explainable and Robust Data-Driven Machine Learning Methods for Digital Healthcare Monitoring
  Shen, Mengqi (Virginia Tech, 2023-10-24)
  Digital healthcare monitoring uses multidisciplinary sensing techniques to track diverse human data and behaviors. Machine learning can promote an individual's well-being through more efficient and accurate health status monitoring. However, challenges such as privacy concerns, varied subjects, diverse sensors, and different objectives hinder precise monitoring. To help address these challenges, this thesis explores projects spanning various healthcare domains. Explainable and robust machine-learning solutions are proposed and tested, including novel signal processing guidelines, innovative feature engineering methods, and pioneering deep-learning networks. These solutions contribute to the state of the art in their respective healthcare domains. The first project addressed the challenge of assessing fall risk among individuals with varying levels of mobility using inertial sensors. Machine-learning models were developed and evaluated using datasets from stroke survivors and community-dwelling elders with participants of varying levels of mobility. Risk indicators that are both explainable and modifiable were obtained through kinematics simplification. These indicators considerably enhance fall risk classification performance compared to existing approaches, and the conclusions align with available biomechanical evidence. In the second project, a new machine-learning architecture was created for fall detection and classification using multistatic radar sensing. This new approach (called eMSFRNet) solved the common problem of weak and varied Doppler signatures caused by line-of-sight restrictions. It is the first method that can classify among fall types using radar sensing, and it yielded state-of-the-art accuracy for both fall detection (99.3%) and seven-type fall classification (76.8%). In the third project, a novel combination of signal processing and a machine learning framework (named MIND) was designed to detect and forecast motor restricted and repetitive behaviors (RRBs) among children with autism spectrum disorder (ASD), using data from multiple wearable sensors. Contrary to prior beliefs that such detection or forecasting was unattainable, the novel MIND AI framework offers a comprehensive and generalizable approach. Transition behaviors were first defined and then identified, suggesting the potential to detect behavioral shifts preceding motor RRBs. The new signal monitoring quantification (MQ) guidelines minimize the impacts of inconsistent data caused by individualized sensor placements. MIND achieved 100% accuracy in detecting motor RRBs on new subjects with unfamiliar behavior types and 92.2% accuracy in forecasting motor RRBs. In conclusion, the work in this thesis showcases the pivotal contributions of robust and explainable machine learning solutions tailored for specific healthcare challenges. These contributions either solve longstanding problems in different healthcare fields or guide new research directions. The new methodologies introduced, including the MQ guidelines, modifiable fall risk indicators, and innovative deep learning models, all help to advance healthcare machine learning applications by merging accuracy with explainability.
- Generative Design for Manufacturing: Integrating Generation with Optimization Using a Guided Voxel Diffusion Model
  Song, Binyang; Chilukuri, Premith Kumar; Kang, Sungku; Jin, Ran (2024)
  In digital manufacturing, converting advanced designs into quality products is hampered by manufacturers' limited design knowledge, restricting the adoption and enhancement of innovative solutions. This paper addresses this challenge through a novel generative denoising diffusion model (DDM) trained on historical 3D design data, enabling the creation of voxel-based designs that meet manufacturing standards. By integrating a surrogate model for evaluating the manufacturability of generated designs, the proposed DDM is able to optimize manufacturability during the generative process. This paper takes a leap forward from the predominant 2D focus of existing generative models towards 3D generative design, which not only broadens manufacturers' design capabilities but also accelerates the development of practical and optimized products. We demonstrate the efficacy of this approach via a case study on Microbial Fuel Cell (MFC) anode design, illustrating how this method can significantly enhance manufacturing workflows and outcomes. Our research offers a path for manufacturers to deepen their design expertise and foster innovation in digital manufacturing.
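A schematic of surrogate-guided sampling in this spirit (a sketch only, assuming a DDPM-style reverse step; the denoiser, surrogate, and guidance scale are placeholders, not the paper's networks): the gradient of a manufacturability surrogate nudges each denoising step toward more manufacturable voxel designs.

```python
# Schematic guided reverse-diffusion step: the gradient of a placeholder
# manufacturability surrogate steers the denoised voxel sample.
import torch

def guided_step(x_t, t, denoiser, surrogate, guidance_scale=0.1):
    with torch.no_grad():
        mean = denoiser(x_t, t)                 # model's reverse-step prediction
    x_in = x_t.detach().requires_grad_(True)
    score = surrogate(x_in).sum()               # predicted manufacturability
    grad = torch.autograd.grad(score, x_in)[0]  # direction of improvement
    return mean + guidance_scale * grad         # nudge the sample

# toy usage with placeholder callables on an 8^3 voxel grid
x = torch.randn(1, 1, 8, 8, 8)
x_next = guided_step(x, t=10,
                     denoiser=lambda v, t: v,
                     surrogate=lambda v: v.mean(dim=(1, 2, 3, 4)))
```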
- GLR Control Charts for Monitoring Correlated Binary Processes
  Wang, Ning (Virginia Tech, 2013-12-27)
  When monitoring a binary process proportion p, it is usually assumed that the binary observations are independent. However, it is very common that the observations are correlated, with ρ being the correlation between two successive observations. The first part of this research investigates the problem of monitoring p when the binary observations follow a first-order two-state Markov chain model with ρ remaining unchanged. A Markov Binary GLR (MBGLR) chart with an upper bound on the estimate of p is proposed to monitor a continuous stream of autocorrelated binary observations, treating each observation as a sample of size n=1. The MBGLR chart with a large upper bound has good overall performance over a wide range of shifts. The MBGLR chart is optimized using the extra number of defectives (END) over a range of upper bounds for the MLE of p. The numerical results show that the optimized MBGLR chart has a smaller END than the optimized Markov binary CUSUM. The second part of this research develops a CUSUM-pρ chart and a GLR-pρ chart to monitor p and ρ simultaneously. The CUSUM-pρ chart with two tuning parameters is designed to detect shifts in p and ρ when the shifted values are known. We apply two CUSUM-pρ charts as a chart combination to detect increases in p and increases or decreases in ρ. The GLR-pρ chart, with an upper bound on the estimate of p and an upper bound and a lower bound on the estimate of ρ, works well when the shifts are unknown. We find that the GLR-pρ chart has better overall performance. The last part of this research investigates the problem of monitoring p with ρ remaining at the target value when the correlated binary observations are aggregated into samples with n>1. We assume that the samples are independent and that there is correlation between the observations within a sample. We propose several GLR and CUSUM charts to monitor p, and the performance of the charts is compared. The simulation results show that the MBNGLR chart has overall better performance than the other charts.
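For reference, the sketch below is a plain Bernoulli CUSUM for detecting an increase in p, ignoring the serial correlation that the Markov-chain-based charts above account for: each observation updates a log-likelihood-ratio statistic, and a signal is raised when it crosses a control limit. The values of p0, p1, and h are illustrative choices, not the thesis's designs.

```python
# Plain Bernoulli CUSUM for an upward shift in p from p0 to p1, assuming
# independent observations. p0, p1, and the control limit h are illustrative.
import math

def bernoulli_cusum(obs, p0=0.05, p1=0.10, h=5.0):
    l1 = math.log(p1 / p0)                # LLR increment for x = 1
    l0 = math.log((1 - p1) / (1 - p0))    # LLR increment for x = 0
    c = 0.0
    for i, x in enumerate(obs):
        c = max(0.0, c + (l1 if x else l0))
        if c > h:
            return i                      # signal: p appears to have shifted upward
    return None                           # no signal

print(bernoulli_cusum([0, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1]))   # signals at index 10
```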
- Heterogeneous Sensor Data based Online Quality Assurance for Advanced Manufacturing using Spatiotemporal Modeling
  Liu, Jia (Virginia Tech, 2017-08-21)
  Online quality assurance is crucial for elevating product quality and boosting process productivity in advanced manufacturing. However, the inherent complexity of advanced manufacturing, including nonlinear process dynamics, multiple process attributes, and low signal/noise ratio, poses severe challenges for both maintaining stable process operations and establishing efficacious online quality assurance schemes. To address these challenges, four different advanced manufacturing processes, namely fused filament fabrication (FFF), binder jetting, chemical mechanical planarization (CMP), and the slicing process in wafer production, are investigated in this dissertation for applications of online quality assurance, with utilization of various sensors such as thermocouples, infrared temperature sensors, and accelerometers. The overarching goal of this dissertation is to develop innovative integrated methodologies tailored for these individual manufacturing processes while addressing their common challenges, to achieve satisfying performance in online quality assurance based on heterogeneous sensor data. Specifically, three new methodologies are created and validated using actual sensor data: (1) real-time process monitoring methods using a Dirichlet process (DP) mixture model for timely detection of process changes and identification of different process states in FFF and CMP; the proposed methodology is capable of tackling non-Gaussian data from heterogeneous sensors in these advanced manufacturing processes for successful online quality assurance; (2) a spatial Dirichlet process (SDP) for modeling complex multimodal wafer thickness profiles and exploring their clustering effects; the SDP-based statistical control scheme can effectively detect out-of-control wafers and achieve wafer thickness quality assurance for the slicing process with high accuracy; and (3) an augmented spatiotemporal log Gaussian Cox process (AST-LGCP) quantifying the spatiotemporal evolution of porosity in binder jetting parts, capable of predicting high-risk areas on consecutive layers; this work fills the long-standing research gap of lacking rigorous layer-wise porosity quantification for parts made by additive manufacturing (AM), and provides the basis for facilitating corrective actions for product quality improvement in a prognostic way. These developed methodologies surmount common challenges of advanced manufacturing that paralyze traditional methods in online quality assurance, and embody key components for implementing effective online quality assurance with various sensor data. There is promising potential to extend them to other manufacturing processes in the future.
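As a rough analogue of the DP-mixture monitoring idea (not the dissertation's implementation), scikit-learn's truncated variational Dirichlet process mixture can cluster non-Gaussian-looking sensor features without fixing the number of clusters in advance; a new process state would surface as a newly occupied mixture component. The data below are synthetic.

```python
# Rough analogue of DP-mixture process monitoring using sklearn's truncated
# variational Dirichlet process mixture. Synthetic two-feature sensor data;
# the dissertation's monitoring schemes are more elaborate.
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(2)
in_control = rng.normal(0.0, 1.0, size=(300, 2))   # baseline sensor features
shifted = rng.normal(3.0, 0.5, size=(50, 2))       # features after a process change

dpgmm = BayesianGaussianMixture(
    n_components=10,                               # truncation level
    weight_concentration_prior_type="dirichlet_process",
    random_state=0,
).fit(np.vstack([in_control, shifted]))

# components with non-negligible weight suggest distinct process states
print(np.round(dpgmm.weights_, 3))
```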
- Hierarchical Bayesian Dataset Selection
  Zhou, Xiaona (Virginia Tech, 2024-05)
  Despite the profound impact of deep learning across various domains, supervised model training critically depends on access to large, high-quality datasets, which are often challenging to identify. To address this, we introduce Hierarchical Bayesian Dataset Selection (HBDS), the first dataset selection algorithm that utilizes hierarchical Bayesian modeling, designed for collaborative data-sharing ecosystems. The proposed method efficiently decomposes the contributions of dataset groups and individual datasets to local model performance using Bayesian updates with small data samples. Our experiments on two benchmark datasets demonstrate that HBDS not only offers a computationally lightweight solution but also enhances interpretability compared to existing data selection methods, by revealing deep insights into dataset interrelationships through learned posterior distributions. HBDS outperforms traditional non-hierarchical methods by correctly identifying all relevant datasets, achieving optimal accuracy with fewer computational steps, even when initial model accuracy is low. Specifically, HBDS surpasses its non-hierarchical counterpart by 1.8% on DIGIT-FIVE and 0.7% on DOMAINNET, on average. In settings with limited resources, HBDS achieves a 6.9% higher accuracy than its non-hierarchical counterpart. These results confirm HBDS's effectiveness in identifying datasets that improve the accuracy and efficiency of deep learning models when collaborative data utilization is essential.
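A toy flavor of the Bayesian-update ingredient (much simpler than HBDS itself, and with invented numbers): score each candidate dataset by a Beta posterior over whether small samples from it improve the local model, then rank datasets by posterior mean. HBDS additionally models dataset groups hierarchically, which this sketch omits.

```python
# Toy Beta-Bernoulli updates scoring candidate datasets by how often small
# samples from each improved a local model. The flat Beta(1, 1) prior and
# the trial outcomes are invented for illustration.
trials = {                     # dataset -> (improved, did_not_improve)
    "dataset_A": (8, 2),
    "dataset_B": (3, 7),
    "dataset_C": (5, 5),
}

posterior_mean = {
    name: (1 + s) / (2 + s + f)        # mean of Beta(1 + s, 1 + f)
    for name, (s, f) in trials.items()
}
for name, m in sorted(posterior_mean.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {m:.2f}")          # rank datasets by posterior mean
```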
- Improving Assessment in Kidney Transplantation by Multitask General Path Model
  Lan, Qing; Chen, Xiaoyu; Li, Murong; Robertson, John; Lei, Yong; Jin, Ran (2023)
  Kidney transplantation helps end-stage patients regain health and quality of life. The decisions for matching donor kidneys and recipients affect the success of transplantation. However, current kidney matching decision procedures do not consider viability loss during preservation. The objective here is to forecast heterogeneous kidney viability based on historical datasets to support kidney matching decision-making. Six recently procured porcine kidneys were used to conduct viability assessment experiments to validate the proposed multitask general path model. The model forecasts kidney viability by transferring knowledge from learning the commonality of all kidneys and the heterogeneity of each kidney. The proposed model provides more accurate kidney viability forecasts than state-of-the-art models, including a multitask learning model, a general path model, and a general linear model. The proposed model achieves satisfactory kidney viability forecasting accuracy because it quantifies the degradation information in the trajectory of a viability loss path: it transfers knowledge of common effects from all kidneys and identifies individual effects of each kidney. This method can be readily extended to other decision-making scenarios in kidney transplantation to improve overall assessment performance. For example, analytical generalizations gained by modeling have been validated on needle biopsy data targeting the improvement of tissue extraction accuracy. The proposed model, applied in multiple kidney assessment processes in transplantation, can potentially reduce the kidney discard rate by providing effective kidney matching decisions. Thus, the increased kidney utilization rate will benefit more patients and prolong their lives.
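One way to read the "common effects plus individual effects" structure (the quadratic path, notation, and synthetic data below are assumptions for illustration, not the paper's model): each kidney's viability trajectory is a shared polynomial in time plus a kidney-specific offset, fitted jointly.

```python
# Sketch of a general-path-style model: viability(t) for kidney k is a
# shared quadratic trend plus a kidney-specific intercept, fit jointly by
# least squares. The quadratic form and synthetic data are assumptions.
import numpy as np

rng = np.random.default_rng(3)
n_units, n_times = 6, 10
t = np.linspace(0, 1, n_times)
# synthetic viability paths: common decay + per-kidney offset + noise
y = (1.0 - 0.6 * t - 0.2 * t**2
     + rng.normal(0, 0.2, n_units)[:, None]
     + rng.normal(0, 0.02, (n_units, n_times)))

# design matrix: per-unit intercept blocks | shared (t, t^2) trend columns
X = np.hstack([
    np.kron(np.eye(n_units), np.ones((n_times, 1))),    # unit-specific intercepts
    np.tile(np.column_stack([t, t**2]), (n_units, 1)),  # shared path terms
])
coef, *_ = np.linalg.lstsq(X, y.ravel(), rcond=None)
print("shared trend (t, t^2):", coef[-2:])              # recovers ~(-0.6, -0.2)
```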