To accelerate research on adversarial examples and the robustness of machine learning classifiers, Google Brain organized a NIPS 2017 competition that encouraged researchers to develop new methods for generating adversarial examples and new ways of defending against them. In this chapter, we describe the structure and organization of the competition and the solutions developed by several of the top-placing teams.
Published: Sep 28, 2018
Keywords: Adversarial Examples; Adversarial Training; Gradient Masking; Adversarial Images; Worst-case Scores
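The fast gradient sign method (FGSM) of Goodfellow et al. is the canonical baseline for generating the kind of adversarial examples the competition's attack tracks targeted. Below is a minimal sketch in PyTorch, assuming a differentiable model and inputs scaled to [0, 1]; the function and parameter names are illustrative, not the competition's reference code.

import torch

def fgsm_attack(model, loss_fn, x, y, epsilon):
    # Perturb x in the direction that maximally increases the loss,
    # with the perturbation bounded in L-infinity norm by epsilon
    # (Goodfellow et al., 2014). Names here are illustrative only.
    x = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x), y)
    loss.backward()
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()  # keep pixels in the valid range

Iterating this step with a small step size, re-projecting onto the epsilon-ball after each update, yields the basic iterative and PGD-style attacks that stronger competition entries built on.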