SUPPORTING DNN SAFETY ANALYSIS AND RETRAINING USING UNSUPERVISED LEARNING
Prof. Sneha A. Khaire
Abstract
The development of deep neural networks (DNNs) has greatly advanced the field of artificial intelligence, but their use in safety-critical applications such as autonomous driving, medical diagnosis, and financial forecasting requires rigorous analysis and verification to ensure reliability and trustworthiness. Unsupervised learning has emerged as a promising technique for supporting DNN safety analysis and retraining: it enables the detection of anomalies, errors, and biases in the input data, as well as the identification of data-driven features and representations that can enhance the generalization and robustness of a model. This paper presents an overview of recent research on unsupervised learning methods for DNN safety, including autoencoders, generative models, clustering, and outlier detection, and their applications in detecting adversarial attacks, handling missing data, improving fault tolerance, and mitigating dataset bias. We also discuss the challenges and opportunities of incorporating unsupervised learning into the DNN development pipeline, and highlight the need for further research and standardization to ensure the scalability, interpretability, and reproducibility of these methods.
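To make the anomaly-detection idea in the abstract concrete, the following is a minimal illustrative sketch (not taken from the paper): inputs are flagged as anomalous when a model trained only on normal data reconstructs them poorly. For simplicity the "autoencoder" here is linear and fitted via PCA rather than a trained neural network; the data, dimensions, and threshold are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "normal" training data: points near a 2-D subspace of a
# 10-D input space, plus small measurement noise.
basis = rng.normal(size=(2, 10))
train = rng.normal(size=(500, 2)) @ basis + 0.05 * rng.normal(size=(500, 10))

# Fit a linear encoder/decoder: the top-k principal components of the
# training data (a linear autoencoder converges to the same subspace).
mean = train.mean(axis=0)
_, _, vt = np.linalg.svd(train - mean, full_matrices=False)
components = vt[:2]  # k = 2 latent dimensions

def reconstruction_error(x):
    """Per-sample squared error after encoding to k dims and decoding back."""
    z = (x - mean) @ components.T        # encode
    x_hat = z @ components + mean        # decode
    return np.sum((x - x_hat) ** 2, axis=1)

# Calibrate a threshold on the training distribution, e.g. the 99th
# percentile of training reconstruction errors.
threshold = np.quantile(reconstruction_error(train), 0.99)

# An off-manifold input reconstructs poorly and exceeds the threshold,
# so it would be routed to safety review or added to a retraining set.
anomaly_x = 5.0 * rng.normal(size=(1, 10))
print(reconstruction_error(anomaly_x)[0] > threshold)
```

The same recipe carries over to nonlinear autoencoders: only the encode/decode step changes, while threshold calibration on held-out normal data stays the same.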
Copyright
Copyright © 2023 Prof. Sneha A. Khaire. This is an open access article distributed under the Creative Commons Attribution License.