Browsing by Author "Wang, Shih-Han"
Now showing 1 - 2 of 2
- Infusing theory into deep learning for interpretable reactivity prediction. Wang, Shih-Han; Pillai, Hemanth Somarajan; Wang, Siwen; Achenie, Luke E. K.; Xin, Hongliang (Nature Research, 2021). Despite recent advances in data acquisition and algorithm development, machine learning (ML) faces tremendous challenges to adoption in practical catalyst design, largely due to its limited generalizability and poor explainability. Herein, we develop a theory-infused neural network (TinNet) approach that integrates deep learning algorithms with the well-established d-band theory of chemisorption for reactivity prediction of transition-metal surfaces. With simple adsorbates (e.g., *OH, *O, and *N) at active site ensembles as representative descriptor species, we demonstrate that TinNet is on par with purely data-driven ML methods in prediction performance while being inherently interpretable. Incorporating scientific knowledge of physical interactions into learning from data sheds further light on the nature of chemical bonding and opens up new avenues for ML discovery of novel motifs with desired catalytic properties.
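The core idea of a theory-infused model, as the abstract describes it, is to keep a physics-motivated term fixed and learn only a data-driven correction on top of it. Below is a minimal numerical sketch of that pattern, not the paper's actual TinNet architecture: the linear d-band baseline, the coordination feature, and all coefficients are invented for illustration, and a plain least-squares fit stands in for the neural network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "surfaces": d-band centers (eV) and a geometric feature.
# Both features and all coefficients below are illustrative only.
d_band_center = rng.uniform(-4.0, -1.0, size=200)
coordination = rng.uniform(6.0, 9.0, size=200)

# Ground-truth adsorption energy: a dominant d-band term (linear in the
# d-band center, in the spirit of the Hammer-Norskov picture) plus a
# smaller structure-dependent residual.
E_true = 0.5 * d_band_center - 0.05 * (coordination - 7.5) ** 2 + 1.0

# Theory-infused model: fix the physics-motivated baseline and learn
# only the residual from data (here via least squares; TinNet itself
# uses a neural network for this part).
theory_baseline = 0.5 * d_band_center + 1.0
residual = E_true - theory_baseline
X = np.column_stack([np.ones_like(coordination),
                     coordination, coordination ** 2])
coef, *_ = np.linalg.lstsq(X, residual, rcond=None)
E_pred = theory_baseline + X @ coef

rmse = float(np.sqrt(np.mean((E_pred - E_true) ** 2)))
print(f"RMSE of theory baseline + learned correction: {rmse:.2e} eV")
```

Because the learned part only has to capture the (small) residual, the model stays interpretable: the theory term carries the physics, and the correction can be inspected separately.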
- Interpretable Machine Learning of Chemical Bonding at Solid Surfaces. Omidvar, Noushin; Pillai, Hemanth Somarajan; Wang, Shih-Han; Mou, Tianyou; Wang, Siwen; Athawale, Andy; Achenie, Luke E. K.; Xin, Hongliang (American Chemical Society, 2021-11-25). Understanding the nature of chemical bonding and how its strength varies across physically tunable factors is important for the development of novel catalytic materials. One way to speed up this process is to employ machine learning (ML) algorithms with online data repositories curated from high-throughput experiments or quantum-chemical simulations. Despite the reasonable predictive performance of ML models for reactivity properties of solid surfaces, the ever-growing complexity of modern algorithms, e.g., deep learning, makes them black boxes with little to no explanation. In this Perspective, we discuss recent advances in interpretable ML for opening up these black boxes from the standpoints of feature engineering, algorithm development, and post hoc analysis. We underline the pivotal role of interpretability as the foundation of next-generation ML algorithms and emerging AI platforms for driving discoveries across scientific disciplines.
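Of the three interpretability routes the Perspective names, post hoc analysis is the easiest to illustrate in a few lines. The sketch below shows permutation feature importance, one common post hoc technique, on a synthetic regression problem; the dataset, the linear stand-in model, and all coefficients are invented for illustration and are not from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic dataset: y depends strongly on feature 0, weakly on
# feature 1, and not at all on feature 2 (coefficients are made up).
X = rng.normal(size=(300, 3))
y = 2.0 * X[:, 0] + 0.3 * X[:, 1] + rng.normal(scale=0.1, size=300)

# Fit a simple linear model as a stand-in for any black-box regressor.
A = np.column_stack([X, np.ones(len(X))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

def predict(X_):
    return np.column_stack([X_, np.ones(len(X_))]) @ coef

def mse(y_true, y_hat):
    return float(np.mean((y_true - y_hat) ** 2))

base_error = mse(y, predict(X))

# Permutation importance: shuffle one feature at a time and measure the
# increase in error. A larger increase means the model relies more on
# that feature -- a model-agnostic, post hoc explanation.
importance = []
for j in range(X.shape[1]):
    X_perm = X.copy()
    X_perm[:, j] = rng.permutation(X_perm[:, j])
    importance.append(mse(y, predict(X_perm)) - base_error)

print("permutation importances:", importance)
```

The appeal of this kind of post hoc probe is that it treats the model purely as a prediction function, so the same procedure applies unchanged to a deep network trained on surface-reactivity data.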