Robustness of Sketched Linear Classifiers to Adversarial Attacks

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review


Linear classifiers are well known to be vulnerable to adversarial attacks: they may predict incorrect labels for input data that are adversarially modified with small perturbations. However, this phenomenon has not been properly understood in the context of sketch-based linear classifiers, typically used in memory-constrained paradigms, which rely on random projections of the features for model compression. In this paper, we propose novel Fast-Gradient-Sign Method (FGSM) attacks for sketched classifiers in the full-, partial-, and black-box-information settings with regard to their internal parameters. We perform extensive experiments on the MNIST dataset to characterize their robustness as a function of perturbation budget. Our results suggest that, in the full-information setting, these classifiers are less accurate on unaltered input than their uncompressed counterparts but just as susceptible to adversarial attacks. In the more realistic partial- and black-box-information settings, however, sketching improves robustness while also having a lower memory footprint.
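To make the full-information attack concrete, the following is a minimal NumPy sketch (not the authors' code): a binary linear classifier whose weights live in a randomly projected feature space, attacked with FGSM by an adversary who knows both the sketch matrix `S` and the weights `w`. All dimensions, the logistic loss, and the variable names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

d, k = 784, 128                                # ambient and sketched dimensions (illustrative)
S = rng.standard_normal((k, d)) / np.sqrt(k)   # random-projection sketch matrix
w = rng.standard_normal(k)                     # weights learned in the sketched space
b = 0.0

def score(x):
    """Decision score of the sketched linear classifier: w . (S x) + b."""
    return w @ (S @ x) + b

def fgsm_full_info(x, y, eps):
    """Full-information FGSM: the attacker knows S and w.

    For logistic loss L = log(1 + exp(-y * score(x))), the gradient with
    respect to x is a positive scalar times -y * S^T w, so the sign of the
    gradient (all FGSM needs) does not depend on x at all.
    """
    grad_sign = -y * np.sign(S.T @ w)
    return x + eps * grad_sign

# Toy demo: perturb a point within an L-infinity budget eps and observe the score shift.
x = rng.standard_normal(d)
y = np.sign(score(x))            # use the model's own label as the "true" label
x_adv = fgsm_full_info(x, y, eps=0.3)
```

In the partial- and black-box settings studied in the paper, the attacker would not have `S` (or `w`) and would have to estimate the gradient sign from queries or a surrogate; this snippet covers only the full-information case.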
Original language: English
Title of host publication: International Conference on Information and Knowledge Management (CIKM)
Number of pages: 5
Publisher: Association for Computing Machinery
Publication date: Oct 2022
ISBN (Electronic): 9781450392365
Publication status: Published - Oct 2022
MoE publication type: A4 Article in conference proceedings
Event: International Conference on Information and Knowledge Management - Atlanta, United States
Duration: 17 Oct 2022 – 21 Oct 2022
Conference number: 31

Fields of Science

  • 113 Computer and information sciences