Interpretable ML Model for Quality Control of Locks Using Counterfactual Explanations
2024 (English) In: Proc. - Int. Conf. Innov. Dev. Inf. Technol. Robot., IDITR, Institute of Electrical and Electronics Engineers Inc., 2024, p. 161-166. Conference paper, Published paper (Refereed)
Abstract [en]
This paper presents an interpretable machine-learning model for anomaly detection in door locks using torque data. The model aims to replace the human tactile sense in the quality-control process, reducing repetitive tasks and improving reliability. The model achieved an accuracy of 96%; however, to gain social acceptance and operators' trust, interpretability of the model is crucial. The purpose of this study was to evaluate an approach that can improve the interpretability of anomalous classifications obtained from an anomaly detection model. We evaluate four instance-based counterfactual explainers: three employ optimization techniques, and one uses a less complex weighted nearest-neighbor approach, which serves as our baseline. The optimization-based approaches leverage a latent representation of the data, obtained with a weighted principal component analysis, which improves the plausibility of the counterfactual explanations and reduces computational cost. The explanations are presented together with the 5th-50th-95th percentile range of the training data, which acts as a frame of reference to improve interpretability. All approaches successfully produced valid and plausible counterfactual explanations. However, the instance-based approaches employing optimization techniques yielded explanations with greater similarity to the observations and were therefore concluded to be preferable despite their higher execution times (4-16 s) compared to the baseline approach (0.1 s). The findings of this study hold significant value for the lock industry and can potentially be extended to other industrial settings using time-series data, serving as a valuable point of departure for further research.
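To illustrate the baseline idea described in the abstract, the following is a minimal sketch of a weighted nearest-neighbor counterfactual explainer together with a 5th-50th-95th percentile frame of reference. The function names, the feature-weight vector, and the use of plain NumPy are illustrative assumptions, not the authors' implementation; the actual paper operates on torque time-series and a weighted PCA latent space.

```python
import numpy as np

def weighted_nn_counterfactual(x_anom, X_normal, weights):
    """Return the normal training instance closest to the anomalous
    instance x_anom under a feature-weighted Euclidean distance.
    Because the counterfactual is a real observed instance, it is
    plausible by construction (illustrative sketch)."""
    diffs = (X_normal - x_anom) * weights          # weighted deviations
    dists = np.sqrt((diffs ** 2).sum(axis=1))      # per-instance distance
    return X_normal[np.argmin(dists)]

def percentile_frame(X_normal, q=(5, 50, 95)):
    """Per-feature 5th, 50th, and 95th percentiles of the normal
    training data, usable as a reference envelope when presenting
    an explanation to an operator."""
    return np.percentile(X_normal, q, axis=0)
```

A usage example with synthetic data: find the closest normal instance to an out-of-range observation, then compare it against the percentile envelope.

```python
rng = np.random.default_rng(0)
X_normal = rng.normal(0.0, 1.0, size=(50, 10))   # stand-in "normal" torque curves
x_anom = np.full(10, 5.0)                        # clearly anomalous observation
cf = weighted_nn_counterfactual(x_anom, X_normal, np.ones(10))
frame = percentile_frame(X_normal)               # shape (3, 10): low/median/high
```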
Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers Inc., 2024, p. 161-166
Keywords [en]
Anomaly detection, Counterfactual explanation, Explainable artificial intelligence, Principal component analysis, Artificial intelligence, Industrial research, Locks (fasteners), Quality control, Counterfactuals, Interpretability, Machine learning models, Optimization techniques, Principal-component analysis, Tactile sense, Torque data
National Category
Computer and Information Sciences
Identifiers
URN: urn:nbn:se:mdh:diva-69541
DOI: 10.1109/IDITR62018.2024.10554297
Scopus ID: 2-s2.0-85197291244
ISBN: 9798350385694 (print)
OAI: oai:DiVA.org:mdh-69541
DiVA, id: diva2:1920838
Conference
Proceedings - 2024 3rd International Conference on Innovations and Development of Information Technologies and Robotics, IDITR 2024
Available from: 2024-12-12 Created: 2024-12-12 Last updated: 2025-10-10 Bibliographically approved