Publication

DeepAdversaries: Examining the Robustness of Deep Learning Models for Galaxy Morphology Classification

Authors

Ciprijanovic, Aleksandra; Kafkes, Diana; Snyder, Gregory; Sanchez, F. Javier; Perdue, Gabriel; Pedro, Kevin; Nord, Brian; Madireddy, Sandeep; Wild, Stefan

Abstract

With the increased adoption of supervised deep learning methods for work with cosmological survey data, the assessment of data perturbation effects (which can naturally occur in data processing and analysis pipelines) and the development of methods that increase model robustness are increasingly important. In the context of morphological classification of galaxies, we study the effects of perturbations in imaging data. In particular, we examine the consequences of using neural networks when training on baseline data and testing on perturbed data. We consider perturbations associated with two primary sources: (a) increased observational noise, represented by higher levels of Poisson noise, and (b) data processing noise incurred by steps such as image compression or telescope errors, represented by one-pixel adversarial attacks. We also test the efficacy of domain adaptation techniques in mitigating the perturbation-driven errors. We use classification accuracy, latent space visualizations, and latent space distance to assess model robustness in the face of these perturbations. For deep learning models without domain adaptation, we find that pixel-level processing errors can easily flip the classification into an incorrect class and that higher observational noise renders a model trained on low-noise data unable to classify galaxy morphologies. On the other hand, we show that training with domain adaptation improves model robustness and mitigates the effects of these perturbations, improving the classification accuracy by up to 23% on data with higher observational noise. Domain adaptation also increases, by up to a factor of ≈ 2.3, the latent space distance between the baseline and the incorrectly classified one-pixel-perturbed images, making the model more robust to these inadvertent perturbations.
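
For concreteness, the sketch below illustrates the two perturbation types described in the abstract: increased observational noise modeled as Poisson resampling, and a single-pixel change standing in for a one-pixel adversarial attack. This is a minimal illustration, not the paper's code; the image size, exposure scaling, and pixel choice are assumptions, and a true one-pixel attack would search for the worst-case pixel location and value.

```python
# Minimal sketch (not the authors' code) of the two perturbation types:
# Poisson observational noise and a one-pixel perturbation.
import numpy as np

rng = np.random.default_rng(0)

def add_poisson_noise(image: np.ndarray, exposure: float = 100.0) -> np.ndarray:
    """Resample each pixel from a Poisson distribution whose mean is the
    (non-negative) pixel flux scaled by an assumed exposure factor; lower
    exposure gives noisier images."""
    scaled = np.clip(image, 0.0, None) * exposure
    return rng.poisson(scaled).astype(np.float64) / exposure

def one_pixel_perturbation(image: np.ndarray, x: int, y: int, value: float) -> np.ndarray:
    """Overwrite a single pixel, mimicking the effect of a one-pixel attack
    (an actual attack optimizes x, y, and value to flip the classifier)."""
    perturbed = image.copy()
    perturbed[y, x] = value
    return perturbed

# Toy 64x64 "galaxy" image with illustrative perturbation settings.
image = rng.random((64, 64))
noisy = add_poisson_noise(image, exposure=25.0)
attacked = one_pixel_perturbation(image, x=32, y=32, value=image.max() * 5)
```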
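The latent space distance metric and the domain adaptation objective mentioned in the abstract can likewise be sketched in a few lines. The snippet below uses a Euclidean distance between latent vectors and a simple linear maximum mean discrepancy (MMD) between batches of latents as an illustrative alignment loss; the paper's actual network, loss form, and weighting are not reproduced here.

```python
# Minimal sketch of latent-space distance and a linear-MMD-style
# domain-adaptation term; illustrative only, not the paper's implementation.
import numpy as np

def latent_distance(z_a: np.ndarray, z_b: np.ndarray) -> float:
    """Euclidean distance between two latent representations."""
    return float(np.linalg.norm(z_a - z_b))

def linear_mmd(source: np.ndarray, target: np.ndarray) -> float:
    """Squared distance between batch means of source and target latents;
    driving this toward zero aligns the two domains in latent space."""
    return float(np.sum((source.mean(axis=0) - target.mean(axis=0)) ** 2))

# Toy example: 128-dimensional latents for baseline (source) and perturbed (target) batches.
rng = np.random.default_rng(1)
z_source = rng.normal(size=(32, 128))
z_target = rng.normal(loc=0.5, size=(32, 128))
print(latent_distance(z_source[0], z_target[0]))
print(linear_mmd(z_source, z_target))  # added to the classification loss during training
```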