A robust and engaging account of the single greatest threat faced by AI and ML systems
In Not With A Bug, But With A Sticker: Attacks on Machine Learning Systems and What To Do About Them, a team of distinguished adversarial machine learning researchers delivers a riveting account of the most significant risk to currently deployed artificial intelligence systems: cybersecurity threats. The authors take you on a sweeping tour – from inside secretive government organizations to academic workshops at ski chalets to Google’s cafeteria – recounting how major AI systems remain vulnerable to the exploits of bad actors of all stripes.
Based on hundreds of interviews with academic researchers, policy makers, business leaders, and national security experts, the authors recount the complex science of attacking AI systems with color and flourish, providing a front-row seat to the people who championed the field. Grounded in real-world examples of past attacks, the book shows how adversaries can upend the reliability of otherwise robust AI systems with straightforward exploits.
The steeplechase to solve this problem has already begun. Nations and organizations are aware that securing AI systems confers a formidable advantage: the prize is not just keeping one’s own AI systems safe but also the ability to disrupt a competitor’s.
An essential and eye-opening resource for machine learning and software engineers, policy makers and business leaders involved with artificial intelligence, and academics studying topics including cybersecurity and computer science, Not With A Bug, But With A Sticker is a warning—albeit an entertaining and engaging one—we should all heed.
How we secure our AI systems will define the next decade. The stakes have never been higher, and public attention and debate on the issue have never been scarcer.
The authors are donating the proceeds from this book to two charities: Black in AI and Bountiful Children’s Foundation.
Table of Contents
Foreword xv
Introduction xix
Chapter 1: Do You Want to Be Part of the Future? 1
Business at the Speed of AI 2
Follow Me, Follow Me 4
In AI, We Overtrust 6
Area 52 Ramblings 10
I’ll Do It 12
Adversarial Attacks Are Happening 16
ML Systems Don’t Jiggle-Jiggle; They Fold 19
Never Tell Me the Odds 22
AI’s Achilles’ Heel 25
Chapter 2: Salt, Tape, and Split-Second Phantoms 29
Challenge Accepted 30
When Expectation Meets Reality 35
Color Me Blind 39
Translation Fails 42
Attacking AI Systems via Fails 44
Autonomous Trap 001 48
Common Corruption 51
Chapter 3: Subtle, Specific, and Ever-Present 55
Intriguing Properties of Neural Networks 57
They Are Everywhere 60
Research Disciplines Collide 62
Blame Canada 66
The Intelligent Wiggle-Jiggle 71
Bargain-Bin Models Will Do 75
For Whom the Adversarial Example Bell Tolls 79
Chapter 4: Here’s Something I Found on the Web 85
Bad Data = Big Problem 87
Your AI Is Powered by Ghost Workers 88
Your AI Is Powered by Vampire Novels 91
Don’t Believe Everything You Read on the Internet 94
Poisoning the Well 96
The Higher You Climb, the Harder You Fall 104
Chapter 5: Can You Keep a Secret? 107
Why Is Defending Against Adversarial Attacks Hard? 108
Masking Is Important 111
Because It Is Possible 115
Masking Alone Is Not Good Enough 118
An Average Concerned Citizen 119
Security by Obscurity Has Limited Benefit 124
The Opportunity Is Great; the Threat Is Real; the Approach Must Be Bold 125
Swiss Cheese 130
Chapter 6: Sailing for Adventure on the Deep Blue Sea 133
Why Is Securing AI Systems So Hard? An Economics Perspective 136
It’s a Sign 141
The Most Important AI Law You’ve Never Heard Of 144
Lies, Damned Lies, and Explanations 146
No Free Lunch 148
What You Measure Is What You Get 151
Who Reaps the Benefits? 153
Cargo Cult Science 155
Chapter 7: The Big One 159
This Looks Futuristic 161
By All Means, Move at a Glacial Pace; You Know How That Thrills Me 163
Waiting for the Big One 166
Software, All the Way Down 169
The Aftermath 172
Race to AI Safety 173
Happy Story 176
In Medias Res 178
Big-Picture Questions 181
Acknowledgments 185
Index 189
About the Authors
Ram Shankar Siva Kumar is Data Cowboy at Microsoft, working at the intersection of machine learning and security. He founded the AI Red Team at Microsoft to systematically find failures in AI systems and to empower engineers to develop and deploy AI systems securely. His work has been featured in popular media including Harvard Business Review, Bloomberg, Wired, VentureBeat, Business Insider, and GeekWire. He serves on the Technical Advisory Board at the University of Washington and is an affiliate at the Berkman Klein Center at Harvard University.
Dr. Hyrum Anderson is a Distinguished Engineer at Robust Intelligence. Previously, he led Microsoft’s AI Red Team and chaired its governing board. He has served as a principal researcher at national labs and cybersecurity firms, including as chief scientist at Endgame. He is a co-founder of the Conference on Applied Machine Learning in Information Security.