If I put a fence with a hole in it in front of a broken wall in front of a partly filled in moat, is my castle secure?
The answer is ‘No’.
On the other hand, if the defects are not immediately visible and do not line up with each other, then having these three layers could stop some attackers completely, while others would need time to find the flaw in each layer. Defense in depth can thus raise the cost to an attacker in time and effort.
If everyone in the village knows about the flaws, there might as well not be any barriers. If every weekend they walk through the various openings to picnic on the castle grounds, then everyone knows these barriers are not meaningful, at least to those who are informed.
It is interesting that Defense in Depth was supposedly conceived by the NSA, or at least documented by them, the masters of penetrating systems. To be fair, security in depth has its place, since one rationale is that attackers may come from different points in the system, so different security measures may be needed to address different aspects of the overall concern. As the NSA notes, an understanding of the system, the adversaries, the risks, and so on is required. Thus "security in depth" has a place as part of a broader understanding, but it does not function merely as a mantra.
Security in Depth is mentioned repeatedly in the OPM oversight hearing, which makes interesting viewing for both the questions and the answers (or lack of answers). Mention of security in depth is usually followed by a statement that there is no security silver bullet (other than security in depth).
There is an alternative to security in depth: security through simplicity.
Take the case of OPM, where it is speculated that security clearance background check forms (form SF-86) were taken, each containing a wealth of personal information about an individual and their contacts. Security technologies failed to prevent the breach, or even detect it while it was in progress. (While OPM is not disclosing details, apparently the contractors working for OPM were breached first, followed by at least two breaches of OPM itself. Information from one later breach was loaded into Einstein, an intrusion detection and analysis system, which then flagged a previously unknown earlier breach.)
Rather than piling up all these questionable and complex technologies, wouldn't it have been simpler and safer to document and follow a single governance rule:
"All clearance forms and their related documentation, including backups, will be immediately and completely destroyed following the decision whether to grant clearance on the basis of those forms."
The principle here is that the information is collected to make a decision, so once the decision is made, get rid of the information. The only reason to keep the information is the possibility that a mistaken decision was made, so one can go back and look for signs that should have revealed the mistake. Is the ability to go back worth the time, costs, and risks of keeping the information? It seems not.
During the OPM hearings the question of priorities came up, with the recurring theme "Isn't security your #1 priority, so why did you let this happen?" There was no clear statement of the obvious answer, which might have been: 'No, security was not the only priority. The priority was running operational support systems for other functions, with security as one aspect of that.'
So if those in charge are not willing to destroy the records once a decision is made, what would be the next best alternative? Probably to keep those records on a machine without internet or network access, in a locked room. This would raise the cost of adding or reviewing records. But why should they be online once a decision is made?
All of this leads to the question of whether the costs and risks of (in)security in depth were warranted in this case, when a policy decision to 'eliminate records that have served their purpose' might have sufficed.
The core problem may not have been technology mechanisms or the speed of their deployment, but rather governance decisions.
Recently I was using a map application on my phone, an application that gives turn-by-turn driving directions and works offline, without a network connection. It works very well, but it provided a lesson in defaults.
Typically I use the application as it comes "out of the box", preferring highway travel as it is typically faster and simpler. Being in California's Silicon Valley, I decided after one agonizing drive on 87 and 101 during rush hour that maybe back roads would be better, so I changed a preference to disable highway travel. To my delight I discovered that back roads were much preferable, especially on short trips, from San Jose to Mountain View, for example. Why bother sitting on 101 if you do not need to?
Everything was fine for a few days until I decided to leave California and drive back to SFO (the San Francisco airport, for you non-frequent travelers who haven't memorized a wide variety of airport codes). Guess what: I started driving and soon realized I was getting a local sightseeing tour of San Jose, and had a pretty good idea I was not aimed at the highway entrance leading to 101. I definitely wanted 101 for that drive (or so I thought; has traffic really gotten that bad in the Valley, or did I just hit a bad day?). I pulled off the road, changed the preference, and then turned off the confused device, since I had neither the time nor the patience to wait while it sorted itself out. I made my own decisions on how to get to 87/101 and the problem was solved.
There are two lessons here. First, it is easy to forget about preferences (that is the idea, and why they are "defaults", after all). Second, recovery might require some "out of band" effort, like giving up on the tool and making a manual (human, dare I suggest) correction.
My navigation experience was not a real problem because I was somewhat familiar with the area, was not really relying on the device after a few days, and could just "punt". Had I really needed it, I could have driven around for a while until the device (hopefully) reoriented itself.
I'm not sure what would happen in a case where a preference is related to privacy, but I suspect that I would not be able to recover, as the personal data would already have been deposited in a giant "big data" store somewhere, ready to be sold, shared, and used without my control or knowledge. Thus, if I set a default to remember my decision to grant access (to location, address book, camera, microphone, etc.), forgetting this decision might be more serious. Although I do not use many such apps now, someday I might (1). If I forget the default, seeing an indicator in the chrome probably won't help, as ads are training me to ignore every pane except the text in the pane I care about (2).
So let us say I mistakenly forget my privacy settings and realize it later. Is there a manual, human way to recover? Ideally I would go to the record on my device of which databases the apps shared information with, follow the links, and request the data to be removed, which it would be. That would be nice, but I suspect it is not so likely.
Thus perhaps a more significant change might be needed if user privacy matters. The best story I've heard is that the new currency is your information, and thus it should be marked appropriately, shared conservatively, and we should all participate in its monetization. Obviously this will require some work, but it seems very interesting. Privacy would then be a byproduct of the monetization, not the end in itself.
(1) Perhaps someone can explain to me why so many Lumia apps seem to require access to my location just to be installed. For example, why does a battery level app need to know my location? I can only assume it is not for me, the end user, but for ad delivery.
(2) In Reader mode, Safari removes the uninteresting material, a very useful feature, and on Firefox I can suppress ads with an extension; but many times I find myself in a raw, ad-splattered browser window because I forgot to take that special action.
Reactively responding to security threats is like a never-ending session of "whack-a-mole". It keeps everyone busy but will probably never end, and it does not scale with the complexity of the web, its applications, and their contexts. Responding to threats is important, but we need a longer-term solution to the underlying problem.
Ultimately what is needed is accountability, as noted by Professor Hal Abelson of MIT (slides, PDF). What is also needed are systematic approaches to the underlying issues. For the most part this currently consists of best practices for code development (e.g. validate inputs), for operating system design (e.g. sandbox applications), and for deployments (e.g. enforce password strength rules). One issue is that everyone is busy meeting time-to-market constraints and focused on "getting the job done", which typically means the visible functionality, not security. It takes a lot of discipline to build in security, and even then the degradation of algorithms over time and attacks that exploit complexity remain, creating a long-term cost issue. Security and Privacy by Design are worthy approaches toward incorporating concern for these issues into the entire process, but they are easier said than done.
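To make the "validate inputs" practice concrete, here is a minimal sketch (my illustration; the names are hypothetical) of allow-list validation, which accepts only known-good input rather than trying to filter out known-bad input:

```python
import re

# Allow-list pattern: a username must be 3-32 characters, a lowercase letter
# first, then lowercase letters, digits, or underscores. Anything else fails.
USERNAME_RE = re.compile(r"[a-z][a-z0-9_]{2,31}")

def validate_username(raw: str) -> str:
    """Return the username if it passes the allow-list check, else raise."""
    if not USERNAME_RE.fullmatch(raw):
        raise ValueError("invalid username")
    return raw

if __name__ == "__main__":
    print(validate_username("alice_01"))          # accepted
    try:
        validate_username("alice'; DROP --")      # rejected outright
    except ValueError as err:
        print("rejected:", err)
```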
Creating standards to enable interoperability is a lot of work, even when the standards are based on previous development experience. Just as code is modularized, so are standards, enabling writing, reviewing, and interop testing in a reasonable time frame. This also allows the work to scale, as different people work on different standards. Yet it creates issues, since not all assumptions are documented or shared, and new ideas and approaches appear later in the process (Promises are one example). Some work is also abandoned for a variety of reasons, and this can be good as the community learns. The net result is that there can be inconsistencies among specifications in basic approaches (e.g. in API interface designs). All of these groups are tasked with creating specific deliverables that specify functionality to be composed with the implementations of other specifications to create applications. This puts the application developer in charge of security and privacy, for only they understand the application, its context, and its end-to-end requirements. The designer of a component cannot speak to the privacy data re-use or retention possibilities, or the key distribution approaches, for example.
This does not mean that security or privacy cannot be improved by the standardization community. They can. Notable examples include HTTP Strict Transport Security (HSTS), which ensures that all requests for all web page resources use TLS regardless of how a page links to them, and Cross-Origin Resource Sharing (CORS), which defines a uniform approach for web browsers to enforce cross-origin web access, enabling a web application to use resources from a site other than its own source. What else can be done?
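As a concrete illustration of those two mechanisms, here is a minimal sketch of a server emitting both headers (the allowed origin is hypothetical, and note that browsers only honor HSTS when the response actually arrives over TLS):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class SecureHeadersHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        # HSTS: tell the browser to use TLS for this host (and subdomains)
        # for the next year, regardless of how a page links to it.
        self.send_header("Strict-Transport-Security",
                         "max-age=31536000; includeSubDomains")
        # CORS: allow exactly one named origin to read this resource
        # cross-origin, rather than the wide-open "*".
        self.send_header("Access-Control-Allow-Origin",
                         "https://app.example.com")
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"hello")

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), SecureHeadersHandler).serve_forever()
```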
Taking an overall architectural view is helpful (see "A Framework for Web Science"). The 2001 Semantic Web layering diagram is illuminating in that the capstone is "Trust" and "Digital Signature" is the glue binding the parts together, showing the fundamental importance of trust based on security mechanisms (the 2006 version, also in that text, shows Crypto instead of Digital Signature among other refinements, but still requires security mechanisms and proof to support trust):
I offer another security-centric architectural diagram to suggest the magnitude of the task of "simply providing a security foundation":
Working through the diagram we see the following items:
Entropy. The basis of most digital security (as opposed to building a physical moat around your castle) is the amount of true randomness, or entropy, upon which the techniques depend. If the randomness is not there, the digital techniques fall apart. That makes entropy the foundation, though it is often ignored.
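A small illustration of the distinction, using only Python's standard library: a general-purpose PRNG is deterministic given its internal state, while the `secrets` module draws on the operating system's entropy source:

```python
import random
import secrets

# NOT for security: Mersenne Twister output is predictable from its state.
weak = random.getrandbits(256)

# For security: 32 bytes (256 bits) from the OS cryptographic entropy source.
strong = secrets.token_bytes(32)

print(f"weak (never use for keys): {weak:064x}")
print(f"strong:                    {strong.hex()}")
```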
Key Management. A fundamental security principle (Kerckhoffs's principle) is that only the key need be secret, not the algorithm. Thus, given good entropy, the next building block is suitable keys: generating them, keeping private keys secret, and so on. A lousy key won't be of much use.
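As a sketch of this layer, here is key-pair generation using the third-party `cryptography` package (an assumption: it is installed via pip). Only the private half must be kept secret; the public half can be shared freely:

```python
# Requires: pip install cryptography
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import ed25519

# Generate a fresh Ed25519 key pair; generation consumes OS entropy,
# which is why the entropy layer below this one matters.
private_key = ed25519.Ed25519PrivateKey.generate()
public_key = private_key.public_key()

# The public key can be published, e.g. in PEM form; the private key stays put.
print(public_key.public_bytes(
    encoding=serialization.Encoding.PEM,
    format=serialization.PublicFormat.SubjectPublicKeyInfo,
).decode())
```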
Next is some means of associating keys with their purpose, discovering and using appropriate keys, and knowing they are valid. I label this Certificate management (including revocation) and everything behind CA certificate issuance. I use PKI terminology, but this may not be the only way to accomplish it (in fact the question arises whether X.509 should be replaced, given its ambiguities and complexity).
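For the common case of relying on an existing PKI, the client side looks roughly like this in Python: the default TLS context performs certificate chain and hostname validation against the platform's CA trust store (the host here is just an example):

```python
import socket
import ssl

# The default context enables certificate verification and hostname checking;
# the handshake below fails if the chain or hostname does not validate.
context = ssl.create_default_context()

with socket.create_connection(("www.w3.org", 443)) as sock:
    with context.wrap_socket(sock, server_hostname="www.w3.org") as tls:
        cert = tls.getpeercert()
        print("protocol:", tls.version())
        print("subject: ", cert["subject"])
        print("expires: ", cert["notAfter"])
```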
To be useful, crypto algorithms depend on keys and their meaningful associations (even though certificates may themselves be created using crypto functions).
Confidentiality and integrity are fundamental security features; I add identity as an essential building block in this layer (and though certificates may support this functionality, there may be more to it in terms of policy, access control, and so on).
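A minimal sketch of confidentiality and integrity delivered together, using an AEAD cipher from the third-party `cryptography` package; note how it rests on the layers below it (entropy for the key, a nonce that must never repeat under a given key):

```python
# Requires: pip install cryptography
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # needs good entropy (layer one)
nonce = os.urandom(12)                      # must never repeat under this key
aead = AESGCM(key)

ciphertext = aead.encrypt(nonce, b"attack at dawn", b"header")  # confidentiality
plaintext = aead.decrypt(nonce, ciphertext, b"header")          # integrity check
assert plaintext == b"attack at dawn"
# Flipping a single ciphertext bit makes decrypt() raise InvalidTag:
# tampering is detected, not silently accepted.
```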
Finally we get to the Web Applications that pull it all together (or do they?).
The reason for doubt is on the side of the diagram: implementation quality for all items matters a great deal, as does the fact that everything must evolve over time (e.g. key and certificate roll-over, algorithm agility, etc.).
I put trust on the other side to indicate that items must operate in an integrated manner to produce a usable result. (I also left out reputation management as another trust mechanism).
As experienced with Internet protocol layering, some functionality is replicated in different layers and we can discuss what the exact layering should be. However, it is clear that there are a large number of logical components, all of which must work correctly depending on correct design, correct implementation, and correct deployment and use. That offers a large number of opportunities for failure.
What is needed are generic, high-level simplifications to make trust more achievable. Strict Transport Security does that, taking a successfully deployed protocol and reducing the attack surface. CORS works toward that end as well, by slightly increasing an attack surface to enable needed functionality, but in a controlled and understood manner.
We need more work to reduce the attack surface in a consistent manner, by reducing optionality and choices. One area is certificates: are there too many choices and details in creating and managing them? Can we reduce the choices and ambiguities?
It seems a good time to review how much can be simplified, how many options can be removed, and how much consistency can be encouraged. Maybe the W3C TAG could work on this, for example. It seems fundamental to next steps for the Architecture of the World Wide Web.
One topic that is getting a lot of press lately is privacy on the Internet, especially web tracking [Notes]. The W3C held a "Workshop on Web Tracking and User Privacy" on 28/29 April 2011, for which an agenda with links to presentations, workshop papers, and a final report are available. This is a difficult topic, since a balance must be struck between what appears to be a legitimate need to enable the advertising-based business models that support "free" content and the ability of users to protect their privacy and keep control over their own personal data. Discussion at the workshop reflected the privacy needs of individuals on the web as well as support for business models driven by advertising. Technical proposals such as an HTTP Do Not Track header and the use of tracking protection lists were considered. Ed Felten of the FTC noted five desired properties of a "Do Not Track" mechanism in his slides (a sketch of the header itself follows the list):
Is it universal? Will it cover all trackers?
Is it usable? Easy to find, understand and use?
Is it permanent? Does opt-out get lost?
Is it effective and enforceable? Does it cover all tracking technologies?
Does it cover collection in general and not just some uses like ads?
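For reference, the proposed mechanism itself is deliberately simple: a single request header sent by the browser. Here is a minimal sketch of a client sending it (the URL is a placeholder; whether the server honors the signal is exactly the enforcement question raised above):

```python
import urllib.request

# Sketch: send the proposed Do Not Track preference as a request header.
# Honoring it is entirely up to the server - the header is only a signal.
req = urllib.request.Request("https://example.com/", headers={"DNT": "1"})
with urllib.request.urlopen(req) as resp:
    print(resp.status, resp.headers.get("Content-Type"))
```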
A significant issue noted at the workshop is that "user expectations may not match what is implemented". One example is that the discussion is not about "opting out of ads" but about opting out of "tracking", so even with opt-out, ads might still appear. More complicated for users is that nuances might be possible, such as allowing first-party tracking but not third-party tracking – yet what does this mean at the edge cases? Is a subsidiary a third party? What about outsourced work? This could be confusing for users and lead to results that are not what they expect or want. As mentioned at the workshop, the details will matter here.
Craig Wills of the Computer Science Department, Worcester Polytechnic Institute noted that first parties have a responsibility not to "leak" private information to third parties through careless implementations. This is detailed in his paper.
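One well-known leakage vector, offered here as my own illustration rather than a summary of the paper, is the Referer header: if a first party embeds user identifiers in its page URLs, every embedded third-party resource may receive them automatically. A minimal sketch (the host names are hypothetical):

```python
from urllib.parse import urlsplit, urlunsplit

# If the first party's page URL carries an identifier...
first_party_url = "https://social.example/profile?userid=alice42"

# ...then when the browser fetches an embedded third-party resource it
# typically sends a request along the lines of:
#   GET /ad.js HTTP/1.1
#   Host: tracker.example
#   Referer: https://social.example/profile?userid=alice42
#
# The fix is in the first party's hands: keep identifiers out of URLs.
parts = urlsplit(first_party_url)
safe_url = urlunsplit((parts.scheme, parts.netloc, parts.path, "", ""))
print(safe_url)  # https://social.example/profile - nothing left to leak
```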
Helen Nissenbaum made an important point during the discussion: consent is not always needed, but only when user expectations are not met (or there is a risk of not meeting them, I assume). Consent is not needed every step of the way. This relates to the theme of avoiding unnecessary user interaction, avoiding meaningless dialogs, and increasing usability.
Can the goal be accomplished another way, with less data?
Regulations and laws should not be overly prescriptive with respect to technology details; otherwise they lose effect as the technology changes. Instead they should focus on the policy and the goals. This is similar to mandating fuel efficiency in cars rather than the way it is achieved.
Apparently enabling some tracking but not all tracking, for a variety of parties, is difficult.
Workshop participants recognized the complexity and difficulties of the topic but also the need for steps to be taken in the near term. During the workshop goals were mentioned that included providing transaction transparency, relevant information, and meaningful user choices. It is clear that some changes may be required.
John Morris of CDT enumerated in his slides the typical objections raised with respect to implementing mechanisms to increase user privacy and indicated how they might be addressed, for example relying on non-technical mechanisms such as reputation, law or regulation rather than technology for enforcement.
Given the various stakeholders and concerns, the principle of doing what is “reasonable” seems to apply here, just as in other aspects of law.
Thus it is not surprising that there was general acceptance by workshop participants of adopting a middle-ground approach – specifically there was no objection to the proposal from CDT that includes the following definition:
“Tracking is the collection and correlation of data about the web-based activities of a particular user, computer, or device across non-commonly branded websites, for any purpose other than specifically excepted third-party ad reporting practices, narrowly scoped fraud prevention, or compliance with law enforcement requests.”
As noted in the W3C workshop report, possible next steps include the W3C chartering a general Interest Group to consider ongoing Web privacy issues and a W3C Working Group to standardize technologies and explore policy definitions of tracking.