Category Archives: Privacy

Trustworthiness of a System

When thinking about trustworthiness it is not enough to consider the trustworthiness of a single component, be it hardware, software or a technology like AI, since trust in the outcome and its consequences depends on the entire system. This system consists of the organization and people who have accountability and maintain governance; the architecture, design and technologies, which must have sufficient quality; and the processes used, which relate to the integrity and sustainability of the system.

A system also includes the external environment and all that it entails, including expectations, influences, rules, regulations and so on. For example, privacy expectations can influence whether a system is viewed as trustworthy.

The following picture summarizes this:

[Figure: trustworthiness of a system spans organization and people (accountability); architecture, design and technology (quality); processes (integrity and sustainability); and the environment (influences and expectations).]

You can learn more about Trustworthiness in the Trustworthiness issue of the IIC Journal of Innovation, the Software Trustworthiness Best Practices white paper and the Managing and Assessing Trustworthiness for IIoT in Practice white paper.

Software Trustworthiness

We rely on many systems to function, and to do so safely and securely, often without much thought, whether they are utilities such as the electric grid, transportation such as airlines, automotive or rail travel, medical care, or the delivery of goods. We normally expect and trust systems to “simply work”. Occasionally we are unpleasantly surprised, such as by the fires in California and elsewhere leading to loss of electrical service, planes crashing due to design issues, autonomous cars not negotiating lanes safely, or supply chains being disrupted.

We place enormous trust in the systems we rely upon. As these systems depend more and more on software to function it becomes essential to understand software and in particular how to have software that can be relied upon for a trustworthy system.

Trusting software requires confidence in the organization that produced it (“Do they do things in a way that inspires confidence? Does the leadership care about quality, safety and so on, or just profits?”), confidence in the actual products (“Was the airplane assembled properly, or were incorrect bolts used?”), and confidence in the service associated with the system (“Is maintenance performed regularly and properly?”).

The reality is that we care about the “complete product”, everything about it. This is especially important to understand with software. Trust depends on evidence that the complete product is trustworthy. As defined by the IIC, trustworthiness is about a number of interacting characteristics, specifically safety, security, reliability, resilience and privacy. We have written about trustworthiness in the Industrial Internet of Things Security Framework, an IIC safety challenges white paper, and an entire issue of the IIC Journal of Innovation devoted to Trustworthiness.

We have just published a new paper on Software Trustworthiness Best Practices. In it we outline the entire lifecycle, including the importance of communicating and validating requirements, proper architecture and design, sufficient support for implementation and testing (including tools), and validating, operating and decommissioning software. We also highlight the value of software protection, which is not always considered. A diagram in the paper illustrates this lifecycle.

The paper includes practical discussions of issues such as software updates, end-of-life strategy and software protection – all topics that can be overlooked when the focus is on software implementation. The appendix includes a software lifecycle checklist that should be helpful, as well as some examples of failures related to software.
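As a taste of what such a checklist covers, here is a hypothetical, machine-readable rendering. The stage names follow the lifecycle outline above; the items shown are my own invented examples, not the paper's:

```python
# Hypothetical lifecycle checklist: stage names follow the outline above,
# but the items are invented examples, not the paper's actual checklist.
LIFECYCLE_CHECKLIST = {
    "requirements":    ["Are trustworthiness requirements written down?",
                        "Have they been communicated and validated?"],
    "architecture":    ["Does the design address safety, security, "
                        "reliability, resilience and privacy?"],
    "implementation":  ["Are suitable tools in place for coding and testing?"],
    "validation":      ["Has the software been validated against requirements?"],
    "operation":       ["Is there a process for secure software updates?"],
    "decommissioning": ["Is there an end-of-life strategy for data and devices?"],
}

def open_items(answers):
    """Checklist items not yet marked complete; answers maps item -> bool."""
    return [item for items in LIFECYCLE_CHECKLIST.values()
            for item in items if not answers.get(item, False)]
```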

Software trustworthiness is essential to creating trustworthy systems, and considering the topics and practices in the paper should help with the journey toward more trustworthy systems.

Insecurity in Depth

If I put a fence with a hole in it in front of a broken wall in front of a partly filled in moat, is my castle secure?

The answer is ‘No’.

On the other hand, if the defects are not immediately visible and do not line up with each other, then having these three layers could stop some attackers completely, while others may need time to find the flaw in each layer. Thus the layers can require more time and effort on the part of an attacker, even though they do not make the castle secure.
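To make that intuition concrete, here is a toy model (my own sketch, not from any standard) that gives each layer a probability that a given attacker finds its flaw, plus a search time. The numbers are invented and the layers are assumed to be independent:

```python
# Toy model of imperfect, independent defensive layers. Each layer has a
# probability that a given attacker finds its flaw, and an expected search
# time. All numbers are invented for illustration.
layers = [
    {"name": "fence with a hole",     "p_find_flaw": 0.9, "search_hours": 2},
    {"name": "broken wall",           "p_find_flaw": 0.7, "search_hours": 5},
    {"name": "partly filled-in moat", "p_find_flaw": 0.8, "search_hours": 3},
]

p_breach = 1.0   # probability the attacker gets through every layer
total_hours = 0  # search effort required if they persist
for layer in layers:
    p_breach *= layer["p_find_flaw"]
    total_hours += layer["search_hours"]

print(f"P(attacker finds every flaw) = {p_breach:.2f}")  # 0.50
print(f"Search effort if they persist = {total_hours} hours")
```

Half the attackers in this toy model give up somewhere along the way, and the persistent ones pay in time. Note how the model collapses once the flaws are public knowledge: the probabilities become 1 and the search times drop to zero.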

If everyone in the village knows about the flaws, then there might as well not be any barriers. If every weekend they walk through the various openings to have a picnic on the castle grounds, then everyone knows that these barriers are not meaningful, at least to those who are informed.

It is interesting that Defense in Depth was supposedly conceived by the NSA, or at least documented by them, the masters of penetrating systems. To be honest, security in depth has its place, since one of the rationales is that attackers may come at the system from different points, so different security measures may be needed to address different aspects of the overall concern. As the NSA notes, an understanding of the system, adversaries, risks, etc. is required. Thus “security in depth” has a place as part of a broader understanding, but it does not work merely as a mantra.

Security in Depth is mentioned repeatedly in the OPM oversight hearing, which makes interesting viewing for both the questions and the answers, or lack of answers. Mention of security in depth is usually followed by a statement that there is no security silver bullet (other than, apparently, security in depth).

There is an alternative to security in depth: security through simplicity.

Take the case of the OPM, where it is speculated that security clearance background check forms (form SF-86) were taken, each holding a wealth of personal information about an individual and their contacts. Security technologies failed to prevent the breach, or even to detect it while it was in progress. (While the OPM is not disclosing details, apparently there were first breaches of the contractors working for OPM, then at least two subsequent breaches. Information from one later breach was loaded into Einstein, an intrusion detection and analysis system, which then flagged a previously unknown earlier breach.)

Rather than piling up all these questionable and complex technologies, wouldn’t it have been simpler and safer to document and follow a single governance rule:

“All clearance forms and their related documentation, including backups, will be immediately and completely destroyed following the decision whether to grant clearance on the basis of those forms.”

The principle here is that the information is collected to make a decision, so once the decision is made, get rid of the information. The only reason to keep it is the event that a mistaken decision was made, so one can go back and look for indications that could have revealed the mistake. Is the ability to go back worth the time, costs and risks of keeping the information? It seems not.
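The rule itself needs very little machinery. Here is a minimal sketch (all names are hypothetical; this is an illustration, not how any OPM system works) in which destruction is part of the decision step itself, rather than a cleanup job that might never run:

```python
from dataclasses import dataclass

@dataclass
class ClearanceForm:
    applicant: str
    details: dict  # the sensitive SF-86 content

class RecordStore:
    """Toy store standing in for primary storage, backups and caches."""
    def __init__(self):
        self.records = {}

    def save(self, form):
        self.records[form.applicant] = form

    def delete_everywhere(self, form):
        # In a real system this must also reach backups and cached copies.
        self.records.pop(form.applicant, None)

def evaluate(form):
    # Stand-in decision logic; the point is what happens afterwards.
    return "disqualifying" not in form.details

def decide_and_destroy(form, store):
    """Make the clearance decision, then destroy the form everywhere."""
    granted = evaluate(form)
    store.delete_everywhere(form)  # destruction tied to the decision itself
    return granted

store = RecordStore()
form = ClearanceForm("applicant-1", {"contacts": ["..."]})
store.save(form)
print(decide_and_destroy(form, store))  # True
print(store.records)                    # {} -- nothing left to breach
```

Once the decision is made there is simply no retained dataset for an attacker to exfiltrate.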

During the OPM hearings the question of priorities came up, with the theme of “Isn’t security your #1 priority, so why did you let this happen?” There was no clear statement of the obvious answer, which might have been: ‘No, security was not the only priority. The priority was running the operational support systems for other functions, with security as an aspect of that.’

So if those in charge are not willing to destroy the records once a decision is made, what would be the next best alternative? Probably to keep those records on a machine without internet or network access, in a locked room. This would raise the cost of adding or reviewing records. But why should they be online once a decision is made?

All of this leads to the question of whether the costs and risks of (in)security in depth should even be the primary concern in this case, when a policy decision to ‘eliminate records that have served their purpose’ might have sufficed.

The core problem may not have been technology mechanisms or the speed of their deployment, but rather governance decisions.

The Problem with Defaults

Recently I was using a map application on my phone, an application that gives turn-by-turn driving directions and works offline, without a network connection. It works very well, but it provided a lesson in defaults.

Typically I use the application as it comes “out of the box”, preferring highway travel as it is usually faster and simpler. Being in California's Silicon Valley, I decided after one agonizing rush-hour drive on 87 and 101 that maybe back roads would be better, so I changed a preference to disable highway travel. To my delight I discovered that back roads were much preferable, especially on short trips, from San Jose to Mountain View, for example. Why bother sitting on 101 if you do not need to?

Everything was fine for a few days until I decided to leave California and drive back to SFO (the San Francisco airport, for you non-frequent travelers who haven’t memorized a wide variety of airport codes). Guess what: I started driving and soon realized I was getting a local sightseeing tour of San Jose, and had a pretty good idea I was not aimed at the highway entrance leading to 101. I definitely wanted to use 101 for that drive (or so I thought; has traffic really gotten that bad in the Valley, or did I just hit a bad day?). I pulled off the road, changed the preference, and then turned off the confused device, since I had neither the time nor the patience to wait while it sorted itself out. I made my own decisions on how to get to 87/101 and the problem was solved.

There are two lessons here. First, it is easy to forget about preferences (that is the idea, and why they are “defaults”, after all). Second, recovery might require some “out of band” effort, like giving up on the tool and making a manual (human, dare I suggest) correction.
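For the first lesson, one mitigation I can imagine (purely a sketch; no app I know of does exactly this, and all names and thresholds here are invented) is for the application to remember when a setting diverged from its default and to surface long-forgotten overrides:

```python
import time

DEFAULTS = {"avoid_highways": False, "voice_guidance": True}

class Preferences:
    """Preferences that remember when they diverged from their defaults."""
    def __init__(self):
        self.values = dict(DEFAULTS)
        self.changed_at = {}  # key -> time the value left its default

    def set(self, key, value):
        self.values[key] = value
        if value != DEFAULTS[key]:
            self.changed_at[key] = time.time()
        else:
            self.changed_at.pop(key, None)  # back to default, nothing to flag

    def stale_overrides(self, max_age_days=7):
        """Return non-default settings older than max_age_days."""
        cutoff = time.time() - max_age_days * 86400
        return [k for k, t in self.changed_at.items() if t < cutoff]

prefs = Preferences()
prefs.set("avoid_highways", True)
prefs.changed_at["avoid_highways"] -= 8 * 86400  # simulate 8 days passing

for key in prefs.stale_overrides():
    print(f"Reminder: '{key}' is still set to {prefs.values[key]}")
```

A reminder like that before planning a long route would have saved me the sightseeing tour.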

My navigation experience was not a real problem because I was somewhat familiar with the area, was not really relying on the device after a few days, and could just “punt”. If I had really needed to, I could have driven around a while until the device (hopefully) oriented itself.

I’m not sure what would happen in a case where a preference is related to privacy, but I suspect that I would not be able to recover, as the personal data would already have been deposited in a giant “big data” store somewhere, ready to be sold, shared and used without my control or knowledge. Thus, if I choose to set a default to remember my decision to grant access (to location, address book, camera, microphone, etc.), forgetting this decision might be more serious. Although I do not use many such apps now, someday I might (1). If I forget the default, seeing an indicator in the chrome probably won’t help, as ads are training me to ignore every pane except the text in the pane I care about (2).

So let us say I mistakenly forget my privacy settings and realize it later. Is there a manual, human way to recover? Ideally I would go to the record on my device of which databases the apps shared information with, follow the links, and request the data to be removed, which it would be. That would be nice, but I suspect it is not so likely.
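Such a record does not exist today as far as I know, but the data structure would be simple enough. Here is a hypothetical sketch (every name, including the ad network, is invented):

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class ShareRecord:
    app: str          # which app shared the data
    data_kind: str    # e.g. "location", "address_book"
    recipient: str    # the downstream database or service
    removal_url: str  # where a deletion request could be sent
    shared_at: datetime

ledger = []  # the on-device record of every share

def record_share(app, data_kind, recipient, removal_url):
    ledger.append(ShareRecord(app, data_kind, recipient, removal_url,
                              datetime.now(timezone.utc)))

def removal_targets(data_kind):
    """Everywhere a given kind of data went, for follow-up removal requests."""
    return [(r.recipient, r.removal_url)
            for r in ledger if r.data_kind == data_kind]

record_share("battery_app", "location", "adnetwork.example",
             "https://adnetwork.example/privacy/delete")
print(removal_targets("location"))
```

The hard part is not the ledger but getting apps and their downstream recipients to write to it honestly and to honor removal requests, which brings us back to governance.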

Thus perhaps a more significant change is needed if user privacy matters. The best story I’ve heard is that the new currency is your information, and thus it should be marked appropriately, shared conservatively, and we should all participate in the monetization. Obviously this will require some work, but it seems very interesting. Privacy would be a byproduct of the monetization, not the end in itself.

Asides:

(1) Perhaps someone can explain to me why so many Lumia apps seem to require knowing my location to be installed. For example, why does a battery level app need to know my location? I can only assume it is not for me, the end-user, but for ad delivery.

(2) On Safari, Reader mode removes the non-interesting material, a very useful feature, and on Firefox I can suppress ads with an extension, but many times I find myself in a raw, ad-splattered browser window because I forgot to take special action.