INN: An Interpretable Neural Network for AI Incubation in Manufacturing
dc.contributor.author | Chen, Xiaoyu | en |
dc.contributor.author | Zeng, Yingyan | en |
dc.contributor.author | Kang, Sungku | en |
dc.contributor.author | Jin, Ran | en |
dc.date.accessioned | 2022-09-29T12:01:21Z | en |
dc.date.available | 2022-09-29T12:01:21Z | en |
dc.date.issued | 2022-06-21 | en |
dc.date.updated | 2022-09-27T20:06:46Z | en |
dc.description.abstract | Both artificial intelligence (AI) and domain knowledge from human experts play an important role in manufacturing decision-making. While smart manufacturing emphasizes fully automated, data-driven decision-making, the AI incubation process engages human experts to enhance AI systems by integrating domain knowledge for modeling, data collection and annotation, and feature extraction. Such an AI incubation process will not only enhance domain knowledge discovery, but also improve the interpretability and trustworthiness of AI methods. In this paper, we focus on the transfer of knowledge from human experts to a supervised learning problem by learning domain knowledge as interpretable features and rules, which can be used to construct rule-based systems to support manufacturing decision-making, such as process modeling and quality inspection. Although many advanced statistical and machine learning methods have shown promising modeling accuracy and efficiency, rule-based systems are still highly preferred and widely adopted because of their interpretability, which allows human experts to comprehend them. However, most existing rule-based systems are constructed from deterministic, human-crafted rules whose parameters, e.g., the thresholds of decision rules, are suboptimal. On the other hand, machine learning methods, such as tree models or neural networks, can learn a decision-rule-based structure, but without much interpretability or agreement with domain knowledge. As a result, neither traditional machine learning models nor human experts' domain knowledge can be directly improved by learning from data. In this research, we propose an interpretable neural network (INN) model with a center-adjustable Sigmoid activation function to efficiently optimize rule-based systems. 
Using the rule-based system derived from domain knowledge to regulate the INN architecture not only improves prediction accuracy with optimized parameters, but also ensures interpretability by adopting the interpretable rule-based systems built on domain knowledge. The proposed INN is effective for supervised learning problems when rule-based systems are available. The merits of the INN model are demonstrated via a simulation study and a real case study on quality modeling of a semiconductor manufacturing process. The source code for this paper is hosted at https://github.com/XiaoyuChenUofL/Interpretable-Neural-Network. | en |
dc.description.version | Published version | en |
dc.format.mimetype | application/pdf | en |
dc.identifier.citation | Xiaoyu Chen, Yingyan Zeng, Sungku Kang, and Ran Jin. 2022. INN: An Interpretable Neural Network for AI Incubation in Manufacturing. ACM Trans. Intell. Syst. Technol. 13, 5, Article 85 (June 2022), 23 pages. https://doi.org/10.1145/3519313 | en |
dc.identifier.doi | https://doi.org/10.1145/3519313 | en |
dc.identifier.issue | 5 | en |
dc.identifier.uri | http://hdl.handle.net/10919/112024 | en |
dc.identifier.volume | 13 | en |
dc.language.iso | en | en |
dc.publisher | ACM | en |
dc.rights | In Copyright | en |
dc.rights.holder | ACM | en |
dc.rights.uri | http://rightsstatements.org/vocab/InC/1.0/ | en |
dc.title | INN: An Interpretable Neural Network for AI Incubation in Manufacturing | en |
dc.title.serial | ACM Transactions on Intelligent Systems and Technology | en |
dc.type | Article - Refereed | en |
dc.type.dcmitype | Text | en |