A Measure of Confidence of Artificial Neural Network Classifiers
Conference: ARCS Workshop 2019 - 32nd International Conference on Architecture of Computing Systems
05/20/2019 - 05/21/2019 in Copenhagen, Denmark
Proceedings: ARCS Workshop 2019
Pages: 5
Language: English
Type: PDF
Authors:
Gschossmann, Andreas; Jobst, Simon; Mottok, Juergen; Bierl, Rudolf (Ostbayerische Technische Hochschule Regensburg, Regensburg, Germany)
Abstract:
Confidence in the results of Artificial Neural Networks (ANNs) is increased by preferring to reject untrustworthy data rather than risking a misclassification. For this purpose, a model is proposed that recognizes, during inference, data that differs significantly from the training data. The proposed model observes the activations of the hidden layers, as well as the input and output layers, of an ANN in a grey-box view. To make ANNs more robust in safety-critical applications, this model can be used to reject flawed data that is suspected to decrease the accuracy of the model. If rejected data is logged during inference, it can be used to improve the model by training it specifically on the missing information. An experiment on the MNIST dataset is conducted and its results are discussed.
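The abstract does not specify how the rejection model scores the observed activations; the following is a minimal Python sketch of the general idea, assuming a simple per-unit Gaussian statistic over the concatenated input, hidden, and output activations. The class name ActivationMonitor, the z-score threshold, and the helper functions collect_activations and log_rejection are illustrative assumptions, not the paper's actual method.

import numpy as np

class ActivationMonitor:
    """Fits per-unit Gaussian statistics over the concatenated activations
    (input, hidden, and output layers) collected on the training set, then
    rejects inputs whose activations deviate strongly at inference time."""

    def __init__(self, threshold=4.0):
        self.threshold = threshold  # maximum tolerated mean z-score (assumed)
        self.mean = None
        self.std = None

    def fit(self, train_activations):
        # train_activations: array of shape (n_samples, n_units)
        self.mean = train_activations.mean(axis=0)
        self.std = train_activations.std(axis=0) + 1e-8  # avoid division by zero

    def score(self, activations):
        # Mean absolute z-score across all observed units; a high score
        # indicates data that differs significantly from the training data.
        z = np.abs((activations - self.mean) / self.std)
        return z.mean(axis=-1)

    def accept(self, activations):
        # True if the input should be classified, False if it should be
        # rejected instead of risking a misclassification.
        return self.score(activations) < self.threshold

A possible usage pattern, with collect_activations standing in for whatever grey-box hook extracts the layer activations from the network:

monitor = ActivationMonitor(threshold=4.0)
monitor.fit(collect_activations(model, train_data))  # hypothetical helper
for sample in inference_data:
    acts = collect_activations(model, sample)
    if monitor.accept(acts):
        prediction = model(sample)
    else:
        log_rejection(sample)  # logged data can later be used for retraining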