I’ll be blogging about the principles of Security Design and such! These are notes from my security course at Berkeley.
Three Cryptographic Principles
- CONSERVATIVE DESIGN – Doug Gwyn says: “Systems should be evaluated according to the worst plausible security failure, under assumptions favorable to the attacker.”
- KERCKHOFFS’ PRINCIPLE – Cryptosystems should remain secure even if the attacker knows all the internal details and algorithms; the only secret is the key. If your secrets leak, it’s far easier to change the key than to change the algorithms (a quick sketch follows this list).
- PROACTIVELY STUDY ATTACKS – Devote considerable effort to trying to break your own systems, so you can gain confidence in their security. In the game of security, the attacker gets the last move, and it is costly to repair a system after wide deployment. Hence, pay attackers (black hats or tiger teams) to identify attacks before the bad guys do.
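Here’s a minimal sketch of Kerckhoffs’ principle in action, using Python’s standard hmac module. The algorithm (HMAC-SHA256) is completely public; all of the security lives in the key, which is cheap to rotate. The message and key handling here are purely illustrative.

```python
import hashlib
import hmac
import secrets

# Kerckhoffs' principle in practice: HMAC-SHA256 is a fully public,
# standardized algorithm. The only secret is the key.
key = secrets.token_bytes(32)          # the one secret, and it's easy to rotate
msg = b"wire $100 to account 42"
tag = hmac.new(key, msg, hashlib.sha256).digest()

# Verifying requires the key, not knowledge of any hidden algorithm.
assert hmac.compare_digest(tag, hmac.new(key, msg, hashlib.sha256).digest())

# If the key ever leaks, just generate a new one: no redesign needed.
key = secrets.token_bytes(32)
```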
Thirteen Principles for Secure Systems
- Security is Economics – it doesn’t make sense to buy a $10k firewall to protect $1k worth of trade secrets. Focus your energy on securing the weakest links, because attackers follow the path of least resistance: there’s no point putting an expensive deadbolt on a screen door when the attacker can just rip through the screen and step through.
- Least Privilege – minimize how much privilege you give each program and system component: only enough to do its job. This doesn’t reduce the probability of failure, only the expected cost of failures, because the less privilege a program has, the less harm it can do (see the privilege-dropping sketch after this list). BIGGEST EXAMPLE: Windows letting users run as Administrator the entire time.
- Use Fail-Safe Defaults – start by denying all access, then explicitly allow the permissions that are needed, the same way a firewall denies by default. BIGGEST EXAMPLE: if a packet filter fails, no packets should be routed; that is, fail closed rather than fail open (though the right default depends on context, e.g. a hospital door may need to fail open). A default-deny sketch appears after this list.
- Separation of Responsibility – split privilege so that no one person has complete power; more than one party must approve before access is granted. Classic example: movie theatre tickets, where one employee sells the ticket and another collects it, so a lone insider can’t quietly let people in for free. (A two-approver sketch appears after this list.)
- Defense in Depth – use multiple redundant protections, so that all of them must be breached to endanger the system.
- Psychological Acceptability – users must buy into the security model. The point: no system can remain secure for long when all its users actively seek to subvert it. System administrators will not win that fight.
- Usability – just like on the midterm: security systems must be usable by ordinary people, and asking users to make complex or subtle security decisions doesn’t work (e.g. Windows warning pop-ups: users will always click “OK” just to make the box go away).
- Ensure Complete Mediation – when enforcing access control policies, make sure every access to every object is checked. BIGGEST EXAMPLE: Google’s cache, which can serve a stored copy of a page without ever hitting the original site’s access checks, so the cached copy escapes mediation. (A reference-monitor sketch appears after this list.)
- Least Common Mechanism – the original design assumptions may have changed! BIGGEST EXAMPLE: Facebook. It started out restricted to Harvard, and which Harvard student would honestly want to hack it? Now that FB is open to the public, the old assumptions no longer hold true.
- Detect if You Cannot Prevent – the title says it all: be able to identify the perpetrator. Forensics are among your most important tools, so keep audit logs that let you analyze break-ins after the fact (see the logging sketch after this list).
- Orthogonal Security – security mechanisms should work “transparently,” without requiring changes to the rest of the system, which makes them especially useful for protecting legacy systems. Orthogonality also lets us improve assurance by composing multiple mechanisms in series.
- Don’t Rely on Security Through Obscurity – as a system becomes more popular, attackers have more incentive to attack it, and it is impossible to keep a system’s design secret from a dedicated, skilled adversary: every running installation ships binary executable code that can be disassembled. Of course, disclosing the design doesn’t improve security by itself either (e.g. open-source applications aren’t automatically secure).
- Design Security In, From the Start – retrofitting security onto an existing application is difficult because you’re stuck with the chosen architecture and can’t easily change the system decomposition to satisfy the principles above. Backwards compatibility is often painful because you have to support the worst security problems of all your previous versions.
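Below are a few minimal Python sketches of the principles above; all names, paths, ids, and policies are invented for illustration. First, least privilege as a Unix privilege drop: this assumes a daemon that starts as root (say, to bind port 443) and then permanently sheds root before touching untrusted input.

```python
import os

def drop_privileges(uid: int = 1000, gid: int = 1000) -> None:
    """Permanently drop root before doing risky work (placeholder ids)."""
    os.setgid(gid)  # drop the group first, while we still have the power to
    os.setuid(uid)  # after this call, root-only operations will fail

# Typical shape of a daemon honoring least privilege:
#   bind_to_port_443()   # needs root (hypothetical helper)
#   drop_privileges()    # everything after this runs with minimal rights
#   handle_requests()    # a bug here can no longer take over the machine
```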
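Fail-safe defaults: access is denied unless an entry explicitly grants it, so a missing or forgotten rule fails closed. The allowlist is hypothetical.

```python
# Fail-safe defaults: deny unless explicitly allowed.
ALLOWED = {
    ("alice", "payroll.db"): {"read"},
    ("bob",   "payroll.db"): {"read", "write"},
}

def is_allowed(user: str, resource: str, action: str) -> bool:
    # Anything missing from the table falls through to the empty set: deny.
    return action in ALLOWED.get((user, resource), set())

assert is_allowed("alice", "payroll.db", "read")
assert not is_allowed("alice", "payroll.db", "write")   # never granted
assert not is_allowed("mallory", "payroll.db", "read")  # unknown user: denied
```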
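Separation of responsibility as a two-person rule: no single approver can trigger the sensitive action. The transfer function and the two-approval threshold are assumptions for the sketch.

```python
def execute_transfer(amount: int, approvals: set[str]) -> None:
    # Two *distinct* approvers are required; one person alone is refused.
    if len(approvals) < 2:
        raise PermissionError("two distinct approvers required")
    print(f"transferring ${amount}")

execute_transfer(500, approvals={"alice", "bob"})  # fine: two parties agreed
try:
    execute_transfer(500, approvals={"alice"})     # a lone insider is blocked
except PermissionError as err:
    print("refused:", err)
```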
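Complete mediation as a tiny reference monitor: the only path to the data runs through the check, and the check happens on every access, not just the first. The Google-cache failure above is exactly what happens when a copy of the object escapes the monitor.

```python
class GuardedFile:
    """Reference-monitor sketch: all reads are mediated (names illustrative)."""

    def __init__(self, path: str, acl: set[str]):
        self._path = path
        self._acl = acl

    def read(self, user: str) -> str:
        if user not in self._acl:      # checked on EVERY access...
            raise PermissionError(user)
        with open(self._path) as f:    # ...the raw file is never handed out
            return f.read()

secrets_file = GuardedFile("/tmp/secrets.txt", acl={"alice"})
```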
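And finally, detection: an audit log built on the standard logging module, so break-ins can be reconstructed even when they can’t be prevented. The event shape is made up.

```python
import logging

logging.basicConfig(filename="audit.log", level=logging.INFO,
                    format="%(asctime)s %(message)s")
audit = logging.getLogger("audit")

def record_login(user: str, success: bool) -> None:
    # Log every attempt, failed ones included: the failures are the
    # evidence you'll need when analyzing a break-in.
    audit.info("login user=%s success=%s", user, success)

record_login("alice", True)
record_login("mallory", False)
```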