Robustness of Sketched Linear Classifiers to Adversarial Attacks

Research output: Chapter in Book/Report/Conference proceeding › Conference article › Scientific › Peer-reviewed

Abstract

Linear classifiers are well known to be vulnerable to adversarial attacks: they may predict incorrect labels for input data that are adversarially modified with small perturbations. However, this phenomenon has not been properly understood in the context of sketch-based linear classifiers, typically used in memory-constrained paradigms, which rely on random projections of the features for model compression. In this paper, we propose novel Fast-Gradient-Sign Method (FGSM) attacks for sketched classifiers in full, partial, and black-box information settings with regard to their internal parameters. We perform extensive experiments on the MNIST dataset to characterize their robustness as a function of perturbation budget. Our results suggest that, in the full-information setting, these classifiers are less accurate on unaltered input than their uncompressed counterparts but just as susceptible to adversarial attacks. However, in the more realistic partial- and black-box-information settings, sketching improves robustness while having a lower memory footprint.
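As a rough illustration of the attack family the abstract describes, below is a minimal, hypothetical sketch of an FGSM attack on a sketched binary linear classifier in the full-information setting, where the attacker knows both the random projection R and the learned weights w. This is not the paper's implementation; the names, dimensions, and the logistic-loss choice are all assumptions.

import numpy as np

rng = np.random.default_rng(0)

d, k = 784, 64                                # original and sketched feature dimensions
R = rng.standard_normal((k, d)) / np.sqrt(k)  # random projection used for sketching
w = rng.standard_normal(k)                    # weights learned on sketched features R @ x
b = 0.0

def predict(x):
    """Label in {-1, +1} from the sketched linear score w^T (R x) + b."""
    return np.sign(w @ (R @ x) + b)

def fgsm(x, y, eps):
    """FGSM: move x by eps in the sign direction of the input gradient.

    For the logistic loss L = log(1 + exp(-y * score)) with
    score = w^T (R x) + b, the gradient w.r.t. x is proportional to
    -y * R^T w, so its sign is all the full-information attacker needs.
    """
    grad = -y * (R.T @ w)         # direction that increases the loss
    return x + eps * np.sign(grad)

# Toy usage on a random "image" flattened to d features.
x = rng.random(d)
y = predict(x)                    # attack the model's own prediction
x_adv = np.clip(fgsm(x, y, eps=0.1), 0.0, 1.0)
print("clean:", y, "adversarial:", predict(x_adv))

In the partial- and black-box-information settings studied in the paper, the attacker lacks access to R and/or w, so this exact gradient is unavailable and the perturbation direction must be approximated instead.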
Original language: English
Title of host publication: International Conference on Information and Knowledge Management (CIKM)
Number of pages: 5
Publisher: Association for Computing Machinery
Publication date: Oct 2022
Pages: 4319-4323
ISBN (electronic): 9781450392365
DOI - permanent links
Status: Published - Oct 2022
OKM publication type: A4 Article in conference proceedings
Event: International Conference on Information and Knowledge Management - Atlanta, United States
Duration: 17 Oct 2022 - 21 Oct 2022
Conference number: 31

Fields of Science

  • 113 Computer and information sciences
