Required reading:
Building Intelligent Systems: A Guide to Machine Learning Engineering, G. Hulten (2018), Chapter 25: Adversaries and Abuse.
The Top 10 Risks of Machine Learning Security, G. McGraw et al., IEEE Computer (2020).
Learning Goals
Explain key concerns in security (in general and with regard to ML models)
Identify security requirements with threat modeling
Analyze a system with regard to attacker goals, attack surface, attacker capabilities
Describe common attacks against ML models, including poisoning and evasion attacks
Understand design opportunities to address security threats at the system level
Apply key design principles for secure system design
Security: (Very Brief) Overview
Elements of Security
Security requirements (also called "policies")
What does it mean for my system to be secure?
Threat model
What are the attacker's goals, capabilities, and incentives?
Attack surface
Which parts of the system are exposed to the attacker?
Defense mechanisms (mitigations)
How do we prevent the attacker from compromising a security requirement?
Security Requirements
What do we mean by "secure"?
Common security requirements: "CIA triad" of information security
Confidentiality: Sensitive data must be accessed by authorized users only
Integrity: Sensitive data must be modifiable by authorized users only
Availability: Critical services must be available when needed by clients
Example: College Admission System
Confidentiality, integrity, or availability?
Applications to the program can only be viewed by staff and faculty in the department.
The application site should be able to handle requests on the day of the application deadline.
Application decisions are recorded only by the faculty and staff.
The acceptance notices can only be sent out by the program director.
Other Security Requirements
Authentication: Users are who they say they are
Non-repudiation: Certain changes/actions in the system can be traced back to whoever was responsible for them
Authorization: Only users with the right permissions can access a resource/perform an action
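The authorization requirement can be made concrete with a minimal sketch for the college admission example. The roles and permission names below are invented for illustration; they mirror the example requirements (staff/faculty view applications, only the program director sends acceptances) but are not from the reading.

```python
# Hypothetical role-based authorization check for the admission system.
# Role and permission names are illustrative assumptions, not a real API.
PERMISSIONS = {
    "faculty": {"view_application", "record_decision"},
    "staff": {"view_application", "record_decision"},
    "program_director": {"view_application", "record_decision", "send_acceptance"},
    "applicant": set(),  # applicants hold no staff-side permissions
}

def authorized(role, action):
    """Authorization: permit an action only if the role holds that permission."""
    return action in PERMISSIONS.get(role, set())

# Only the program director may send acceptance notices.
assert authorized("program_director", "send_acceptance")
assert not authorized("staff", "send_acceptance")
```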
Threat Modeling
Why Threat Model?
What is Threat Modeling?
Threat model: A profile of an attacker
Goal: What is the attacker trying to achieve?
Capability:
Knowledge: What does the attacker know?
Actions: What can the attacker do?
Resources: How much effort can the attacker spend?
Incentive: Why does the attacker want to do this?
Attacker Goal
What is the attacker trying to achieve?
Typically, undermine one or more security requirements
Example: College admission
Access other applicants' info without authorization
Modify application status to “accepted”
Cause website shutdown to sabotage other applicants
Attacker Capability
What actions are available to the attacker (to achieve its goal)?
Depends on system boundary & interfaces exposed to external actors
Use an architecture diagram to identify attack surface & actions
STRIDE Threat Modeling
A systematic approach to identifying threats (i.e., attacker actions)
Construct an architectural diagram with components & connections
Designate the trust boundary
For each untrusted component/connection, identify potential threats
For each potential threat, devise a mitigation strategy
Consider entire system, from training to telemetry, to user interface, to pipeline automation, to monitoring
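The STRIDE steps above can be sketched as a simple checklist generator: pair every component outside the trust boundary with each STRIDE threat category, then walk the resulting list to devise mitigations. The component names below are hypothetical placeholders for an ML pipeline, not from the reading.

```python
# STRIDE threat categories (Spoofing, Tampering, Repudiation,
# Information disclosure, Denial of service, Elevation of privilege).
STRIDE = [
    "Spoofing",
    "Tampering",
    "Repudiation",
    "Information disclosure",
    "Denial of service",
    "Elevation of privilege",
]

# Hypothetical components sitting outside the trust boundary of an ML system.
untrusted = ["user-facing web form", "telemetry ingestion", "training-data feed"]

def enumerate_threats(components):
    """Pair every untrusted component with every STRIDE category,
    producing the checklist a team walks through for mitigations."""
    return [(c, threat) for c in components for threat in STRIDE]

threats = enumerate_threats(untrusted)
for component, threat in threats:
    print(f"{component}: consider {threat}")
```

The point of the exercise is exhaustiveness: 3 components x 6 categories yields 18 candidate threats to consider, even if many are quickly ruled out.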
AI-based security solutions can be attacked themselves
ML & Data Privacy
Andrew Pole, who heads a 60-person team at Target that studies customer behavior, boasted at a conference in 2010 about a proprietary program that could identify women - based on their purchases and demographic profile - who were pregnant.
Threat modeling to identify security requirements & attacker capabilities
ML-specific attacks on training data, telemetry, or the model
Poisoning attack on training data to influence predictions
Evasion attacks to shape input data to achieve intended predictions (adversarial learning)
Model inversion attacks for privacy violations
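Poisoning and evasion attacks can both be demonstrated on a toy model. The sketch below uses a nearest-centroid classifier on synthetic 2-D data as a stand-in for a spam filter; the data, model, and attack parameters are all invented for illustration, not from the reading.

```python
import numpy as np

# Toy nearest-centroid classifier standing in for a spam filter
# (class 0 = ham, class 1 = spam); all data here is synthetic.
rng = np.random.default_rng(0)
ham = rng.normal(loc=-2.0, scale=0.5, size=(50, 2))
spam = rng.normal(loc=2.0, scale=0.5, size=(50, 2))

def train(ham_pts, spam_pts):
    return ham_pts.mean(axis=0), spam_pts.mean(axis=0)

def predict(centroids, x):
    c_ham, c_spam = centroids
    return int(np.linalg.norm(x - c_spam) < np.linalg.norm(x - c_ham))

model = train(ham, spam)
target = np.array([1.5, 1.5])  # a spam-like input the attacker cares about

# Poisoning attack: mislabeled copies of the target injected into the
# ham training data drag the ham centroid toward the spam region, so
# the retrained model classifies the target as ham.
poison = np.tile(target, (500, 1))
poisoned_model = train(np.vstack([ham, poison]), spam)

# Evasion attack: leave the (clean) model alone and instead perturb the
# input, stepping it toward the ham centroid until the prediction flips.
direction = model[0] - target
step = 0.1 * direction / np.linalg.norm(direction)
x_adv = target.copy()
while predict(model, x_adv) == 1:
    x_adv = x_adv + step
```

The two attacks differ in what the attacker controls: poisoning assumes influence over training data, evasion only over inputs at inference time.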
Security design at the system level
Principle of least privilege
Isolation & compartmentalization
AI can be used for defense (e.g. anomaly detection)
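As a minimal illustration of anomaly detection as a defense, the sketch below flags request rates whose z-score deviates far from a learned baseline; the traffic numbers and threshold are assumptions made up for the example.

```python
import numpy as np

# Baseline of normal traffic (requests/minute); synthetic data for illustration.
rng = np.random.default_rng(1)
normal_traffic = rng.normal(loc=100.0, scale=10.0, size=1000)

mean, std = normal_traffic.mean(), normal_traffic.std()

def is_anomalous(rate, threshold=4.0):
    """Flag a request rate whose z-score against the baseline exceeds
    the threshold (threshold of 4 standard deviations is an assumption)."""
    return abs(rate - mean) / std > threshold

# An ordinary load passes; a sudden flood (e.g., a DoS attempt) is flagged.
print(is_anomalous(105.0), is_anomalous(500.0))
```

Simple statistical detectors like this are a first line of defense; note (per the slide above) that learned detectors can themselves be attacked.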
Key takeaway: Adopt a security mindset! Assume all components may be vulnerable in one way or another, and design your system to explicitly reduce the impact of potential attacks.
Security and Privacy
Eunsuk Kang