2023 AIChE Annual Meeting
(509c) Graph Attention Based Self-Explanatory Fault Diagnosis in Chemical Process
This work proposes a graph attention network (GAT)-based algorithm for fault diagnosis [3]. In continuous chemical processes, process variables associated with materials are transported along the process flow, and the control logic of the process variables is transferred through control loops [4]. This direction of propagation resembles the message-passing path in a graph neural network (GNN). Motivated by this idea, we designed a directed graph in which each process variable is a node, and streams and control loops are edges. During training of the proposed model, information about the variables is passed along the directed edges, which uncovers the causality between them. Unlike plain GNNs, the GAT's attention mechanism weighs the information of neighboring nodes by learned importance. In chemical processes, neighboring features are likely to matter to the target feature to varying degrees, and the attention mechanism can capture these differences. In addition, by inspecting the attention scores, the root cause and the causal pathway can be interpreted directly, without applying a separate explainable-AI method to the model.
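The attention-weighted message passing described above can be illustrated with a minimal NumPy sketch of a single-head GAT layer over a directed edge list. This is an assumed toy implementation for illustration only, not the authors' code; the function name `gat_attention` and all shapes are hypothetical, and it follows the formulation of Veličković et al. [3] (shared linear map, LeakyReLU-scored attention, softmax over the incoming neighborhood plus a self-loop):

```python
import numpy as np

def gat_attention(h, edges, W, a):
    # h: (N, F) node features; edges: directed (src, dst) pairs;
    # W: (F, F') shared linear map; a: (2F',) attention vector.
    z = h @ W
    out = np.zeros_like(z)
    attn = {}
    for j in range(h.shape[0]):
        # Messages flow along directed edges: node j aggregates its
        # incoming neighbors plus a self-loop.
        nbrs = [i for i, d in edges if d == j] + [j]
        e = np.array([np.concatenate([z[i], z[j]]) @ a for i in nbrs])
        e = np.where(e > 0, e, 0.2 * e)        # LeakyReLU, slope 0.2
        alpha = np.exp(e - e.max())
        alpha /= alpha.sum()                   # softmax over the neighborhood
        attn[j] = alpha                        # learned neighbor importances
        out[j] = (alpha[:, None] * z[nbrs]).sum(axis=0)
    return out, attn
```

The per-node `attn` dictionary is exactly what makes the model self-explanatory: each coefficient quantifies how much one upstream variable contributed to a downstream variable's representation.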
The Tennessee Eastman Process (TEP) is used in this study. The process flow diagram of TEP is represented by a graph containing 57 nodes and 108 directed edges. A graph convolutional network (GCN), which assigns equal weight to neighboring nodes during convolution, serves as the baseline for the proposed model [5]. Our model outperformed the GCN by 7.5% in classification accuracy, which suggests that learning the varying attention between neighboring feature variables is crucial; this is examined in detail via entropy histograms of the two models. A further strength of the attention mechanism is its robustness to long input sequences, owing to multi-head parallel computation: as the input sequence length increased by 10 to 20 fold, the GAT outperformed a long short-term memory model by 12% in accuracy. Moreover, when examining the attention heatmap after training, the edges associated with the known root cause of each fault scenario carried higher attention scores. The attention heatmap can therefore serve as a guide for fault interpretation, since the model paid greater attention to these nodes and edges under the faulty condition. Lastly, by augmenting the graph structure and node features, graph contrastive learning improved the classification accuracy of faults 3, 9, and 15 of TEP, which are notoriously difficult to classify because of their high similarity to the normal scenario [6].
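The graph augmentations used for contrastive learning can be sketched as follows. This is an assumed minimal example in the spirit of You et al. [6], not the authors' implementation; the helper name `augment_graph` and the default rates are hypothetical. It shows the two standard augmentations named in the text: perturbing the graph structure (random edge dropping) and the node features (random feature masking):

```python
import numpy as np

def augment_graph(edges, features, edge_drop=0.1, feat_mask=0.1, rng=None):
    # Two augmentations common in graph contrastive learning:
    # randomly drop a fraction of edges, and randomly zero out
    # a fraction of the node-feature dimensions.
    rng = rng or np.random.default_rng()
    kept = [e for e in edges if rng.random() >= edge_drop]
    mask = (rng.random(features.shape[1]) >= feat_mask).astype(features.dtype)
    return kept, features * mask
```

In a contrastive setup, two independently augmented views of the same process graph are encoded and pulled together in embedding space, which encourages representations that are invariant to these perturbations and helps separate faults 3, 9, and 15 from the normal condition.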
References
[1] Gharahbagheri, H., Imtiaz, S.A. and Khan, F., 2017. Root cause diagnosis of process fault using KPCA and Bayesian network. Industrial & Engineering Chemistry Research, 56(8), pp.2054-2070.
[2] Harinarayan, R.R.A. and Shalinie, S.M., 2022. XFDDC: eXplainable Fault Detection Diagnosis and Correction framework for chemical process systems. Process Safety and Environmental Protection, 165, pp.463-474.
[3] Veličković, P., Cucurull, G., Casanova, A., Romero, A., Liò, P. and Bengio, Y., 2017. Graph attention networks. arXiv preprint arXiv:1710.10903.
[4] Bauer, M., Cox, J.W., Caveness, M.H., Downs, J.J. and Thornhill, N.F., 2006. Finding the direction of disturbance propagation in a chemical process using transfer entropy. IEEE Transactions on Control Systems Technology, 15(1), pp.12-21.
[5] Kipf, T.N. and Welling, M., 2016. Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907.
[6] You, Y., Chen, T., Sui, Y., Chen, T., Wang, Z. and Shen, Y., 2020. Graph contrastive learning with augmentations. Advances in Neural Information Processing Systems, 33, pp.5812-5823.