Measuring Robustness of Classifiers to Geometric Transformations





Publication date: 2017

For many classification tasks, the ideal classifier should be invariant to geometric transformations such as changes in the viewing angle. However, this cannot be said decisively of state-of-the-art image classifiers such as convolutional neural networks, mainly because methods for measuring their transformation invariance are lacking, especially for higher-dimensional transformations. In this project, we propose two algorithms for such measurement. The first, Manifool, uses the structure of the image appearance manifold to find sufficiently small transformation examples and uses these to compute the invariance of the classifier. The second, the iterative projection algorithm, uses adversarial perturbation methods for neural networks to find fooling examples within a given transformation set. We compare these methods to similar algorithms in terms of speed and validity, and use them to show that transformation invariance increases with the depth of the network, even for reasonably deep networks. Overall, we believe these two algorithms can be used to analyze different architectures and can help build more robust classifiers.
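The invariance notion described above — the smallest transformation that changes the classifier's prediction — can be illustrated with a minimal sketch. The toy classifier, the choice of rotation as the transformation, and the simple line search below are all illustrative assumptions, not the project's actual algorithms, which operate on the image appearance manifold:

```python
import numpy as np

# Hypothetical toy setup: a linear classifier on 2-D points, with rotation
# as a one-parameter geometric transformation. We search for the smallest
# rotation angle that flips the predicted label; a smaller fooling angle
# means a less invariant classifier.

def predict(w, x):
    """Toy linear classifier: label is the sign of <w, x>."""
    return int(np.dot(w, x) > 0)

def rotate(x, theta):
    """Rotate a 2-D point by angle theta (radians)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([c * x[0] - s * x[1], s * x[0] + c * x[1]])

def min_fooling_angle(w, x, max_angle=np.pi, steps=1000):
    """Line search for the smallest rotation that changes the label.
    A crude stand-in for the manifold search performed on real images."""
    y0 = predict(w, x)
    for theta in np.linspace(0.0, max_angle, steps):
        if predict(w, rotate(x, theta)) != y0:
            return theta
    return None  # classifier is invariant up to max_angle

w = np.array([1.0, 0.0])   # decision boundary: the vertical axis
x = np.array([1.0, 0.5])   # initially classified as positive
theta_star = min_fooling_angle(w, x)
print(theta_star)          # smallest fooling rotation, about atan(2) radians
```

For real image classifiers the transformation set is higher-dimensional and the search is guided by gradients (the adversarial-perturbation machinery the abstract mentions) rather than exhaustive scanning, but the invariance score has the same form: the magnitude of the smallest fooling transformation.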

Keywords: neural networks; geometric transformations
Reference: EPFL-STUDENT-230235





Author: Kanbak, Can
Advisor: Frossard, Pascal

Source: https://infoscience.epfl.ch/record/230235?ln=en






