INN: An Interpretable Neural Network for AI Incubation in Manufacturing

dc.contributor.author: Chen, Xiaoyu
dc.contributor.author: Zeng, Yingyan
dc.contributor.author: Kang, Sungku
dc.contributor.author: Jin, Ran
dc.date.accessioned: 2022-09-29T12:01:21Z
dc.date.available: 2022-09-29T12:01:21Z
dc.date.issued: 2022-06-21
dc.date.updated: 2022-09-27T20:06:46Z
dc.description.abstract: Both artificial intelligence (AI) and domain knowledge from human experts play an important role in manufacturing decision-making. While smart manufacturing emphasizes fully automated, data-driven decision-making, the AI incubation process involves human experts who enhance AI systems by integrating domain knowledge for modeling, data collection and annotation, and feature extraction. Such an AI incubation process will not only enhance domain knowledge discovery, but also improve the interpretability and trustworthiness of AI methods. In this paper, we focus on transferring knowledge from human experts to a supervised learning problem by learning domain knowledge as interpretable features and rules, which can be used to construct rule-based systems that support manufacturing decision-making, such as process modeling and quality inspection. Although many advanced statistical and machine learning methods have shown promising modeling accuracy and efficiency, rule-based systems are still highly preferred and widely adopted because human experts can readily comprehend them. However, most existing rule-based systems are constructed from deterministic, human-crafted rules whose parameters, e.g., the thresholds of decision rules, are suboptimal. On the other hand, machine learning methods such as tree models or neural networks can learn a decision-rule-like structure, but with little interpretation or agreement with domain knowledge. Consequently, human experts' rules cannot be directly improved by learning from data, and purely data-driven models cannot readily incorporate that domain knowledge. In this research, we propose an interpretable neural network (INN) model with a center-adjustable Sigmoid activation function to efficiently optimize rule-based systems (an illustrative sketch of this activation follows the record below). Using the rule-based system from domain knowledge to regulate the INN architecture not only improves prediction accuracy through optimized parameters, but also ensures interpretability by adopting the interpretable rule-based systems from domain knowledge. The proposed INN is effective for supervised learning problems when rule-based systems are available. The merits of the INN model are demonstrated via a simulation study and a real case study on quality modeling of a semiconductor manufacturing process. The source code for this paper is hosted at https://github.com/XiaoyuChenUofL/Interpretable-Neural-Network.
dc.description.version: Published version
dc.format.mimetype: application/pdf
dc.identifier.citation: Xiaoyu Chen, Yingyan Zeng, Sungku Kang, and Ran Jin. 2022. INN: An Interpretable Neural Network for AI Incubation in Manufacturing. ACM Trans. Intell. Syst. Technol. 13, 5, Article 85 (June 2022), 23 pages. https://doi.org/10.1145/3519313
dc.identifier.doi: https://doi.org/10.1145/3519313
dc.identifier.issue: 5
dc.identifier.uri: http://hdl.handle.net/10919/112024
dc.identifier.volume: 13
dc.language.iso: en
dc.publisher: ACM
dc.rights: In Copyright
dc.rights.holder: ACM
dc.rights.uri: http://rightsstatements.org/vocab/InC/1.0/
dc.title: INN: An Interpretable Neural Network for AI Incubation in Manufacturing
dc.title.serial: ACM Transactions on Intelligent Systems and Technology
dc.type: Article - Refereed
dc.type.dcmitype: Text
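
The abstract above centers on one architectural idea: thresholds from an expert rule-based system become trainable parameters of the network through a center-adjustable Sigmoid activation, so the rules can be optimized from data without losing their interpretable reading. The paper gives the exact formulation; the sketch below (referenced in the abstract) is only a minimal PyTorch illustration of that idea. The class name CenterAdjustableSigmoid, the trainable slope, and the temperature-threshold rule are illustrative assumptions, not taken from the paper or its repository.

import torch
import torch.nn as nn

class CenterAdjustableSigmoid(nn.Module):
    # A sigmoid whose inflection point ("center") is trainable:
    #   sigma(x) = 1 / (1 + exp(-k * (x - c)))
    # Initializing the center c at an expert rule threshold lets
    # gradient descent fine-tune that threshold from data while the
    # unit keeps its rule-like interpretation (x below vs. above c).
    def __init__(self, init_center=0.0, init_slope=1.0):
        super().__init__()
        self.center = nn.Parameter(torch.tensor(float(init_center)))
        self.slope = nn.Parameter(torch.tensor(float(init_slope)))  # assumed trainable slope

    def forward(self, x):
        return torch.sigmoid(self.slope * (x - self.center))

# Hypothetical rule "nonconforming if temperature > 350": a soft,
# trainable version of the threshold, initialized at 350.
rule_unit = CenterAdjustableSigmoid(init_center=350.0)
x = torch.tensor([340.0, 355.0])
print(rule_unit(x))  # close to [0, 1]: below vs. above the center

Per the abstract, arranging such units so the network architecture mirrors the expert rule-based system is what lets the INN optimize the rule parameters for prediction accuracy while retaining the interpretability of the original rules.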

Files

Original bundle
Name: 3519313.pdf
Size: 3.3 MB
Format: Adobe Portable Document Format
Description: Published version

License bundle
Name: license.txt
Size: 0 B
Format: Item-specific license agreed to upon submission