Annotating the Internet of Things: Annotate Automation

As sensors proliferate, the quantity and variety of data will become overwhelming and difficult to process efficiently and effectively. Information of value will be derived from multiple sensors, possibly of different types and from different creators and software. This situation calls for standardization to achieve interoperability, yet that standardization must scale and must not depend on centralized management.

Adoption will also require simplicity and usability. Conceptually, an extensible annotation mechanism allows arbitrary information to be associated with sensor data and shared in an open, extensible manner. The key idea is that rather than requiring every sensor schema or data structure to define its own additional data formats or extension mechanisms, annotations can be added in a uniform manner at any time and at any point in the processing flow, with the definition of what an annotation means deferred until it is actually needed and used.

In some ways the Internet of Things may turn out like “web services” were envisioned earlier (e.g. WS-*, SOAP, etc.): multiple sources of information will send messages that may be aggregated and correlated by intermediaries, which may in turn become sources for further recipients. Ultimately there will be a sink providing information to a “user”, although this may be a software application (not depicted in the diagram below). Annotations could serve as an interoperable means to associate semantic information. The following diagram [1] might represent this, though it comes from a paper about a specific sports-monitoring application.

Network of sensors

One can envision a world where sensors use annotations to provide metadata associated with sensor data, such as calibration, sensitivity, location, and environmental readings relevant to the sensor’s core data. One can also imagine web intermediaries adding annotations (e.g. weather data to associate with the basic readings of a thermostat), or humans and others adding annotations later. All of this additional information can aid the correlation and processing of the data. There are also numerous practical applications of annotations beyond sensors.
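To make this concrete, here is a minimal sketch of what a sensor reading carrying an intermediary-added annotation might look like in JSON-LD. This is my own illustration, not a standard vocabulary: the context URL, type names, and property names are all invented for the example.

{
  "@context": "http://example.com/contexts/sensor.jsonld",
  "@type": "SensorReading",
  "sensor": "http://example.com/sensors/thermo-42",
  "value": 21.7,
  "unit": "Cel",
  "timestamp": "2014-09-20T10:15:00Z",
  "annotations": [
    {
      "creator": "http://example.com/gateways/7",
      "calibrationOffset": -0.3,
      "location": "San Jose, CA",
      "outdoorTemperature": 18.2
    }
  ]
}

The point is that the gateway attaches the calibration and weather information without the sensor schema having anticipated it; a consumer that understands the context can interpret the annotation, and one that does not can safely ignore it.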

The web community has already defined widely used core mechanisms such as HTTP and REST APIs, JSON, HTML5, etc. that can form a stack for sensor sharing. One aspect of this stack will be the need to share the meaning of data, so that it can be combined and used by applications that offer more value. The semantic web community has worked for years to create a strong, flexible, well-defined model including many of the needed pieces, such as a powerful triple model and the use of URLs for type definitions. Semantic web adoption has occurred behind the scenes but has not been very user-visible, since some technologies are verbose and complicated (RDF) and some discussions tend to be obscure (e.g. debates about ontology theory). None of this takes away from the fact that the infrastructure is well defined, well tested, and thoroughly analyzed.

The definition of a simple approach to associate JSON named values with triples (or quads) in the semantic web model, through the inclusion of simple JSON definition files, can be considered a breakthrough. This JSON-LD approach hides the entire semantic web mechanism from web developers and does not require explicit use of RDF syntax, yet enables the full power of the semantic web to be used behind the scenes when information is processed, without burdening information creators (or inflating the data traffic needed to represent the information). The following hides a richer model behind an easy-to-use syntax [2]:

<script type="application/ld+json">
{
  "@context": "http://schema.org",
  "@type": "Restaurant",
  "name": "Fondue for Fun and Fantasy",
  "description": "Fantastic and fun for all your cheesy occasions",
  "openingHours": "Mo,Tu,We,Th,Fr,Sa,Su 11:30-23:00",
  "telephone": "+155501003333",
  "menu": "http://example.com/menu"
}
</script>

This means that the semantic layer in the diagram above need not imply syntax that is hard to understand and thus hard to adopt, nor excessive verbosity or message sizes that create a barrier to deployment.

The W3C Annotation Community Group has already created an Open Annotation Data Model that leverages the semantic web model to represent a wide variety of annotation use cases, whether the target is text, audio, video, raw data, or what have you, while also enabling a wide variety of annotations on those targets. This flexible model does not require JSON-LD, but I believe JSON-LD will pave the way toward rapid adoption.
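A minimal annotation in this model might look like the following JSON-LD, loosely based on the Community Group drafts; treat the exact context URL and property names as assumptions, since they may differ in the final specifications.

{
  "@context": "http://www.w3.org/ns/oa.jsonld",
  "@id": "http://example.org/annotations/1",
  "@type": "oa:Annotation",
  "hasBody": {
    "@type": "dctypes:Text",
    "chars": "This reading looks mis-calibrated."
  },
  "hasTarget": "http://example.org/sensors/thermo-42/readings/1234"
}

The body and the target can each be essentially any web resource, which is what makes the model general enough to cover text, media, and raw data alike.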

The W3C Web Annotation Working Group will produce standards to address these needs. This work does not start from scratch but builds on the previous work of the Community Group. As outlined in the Annotation Working Group charter, the deliverables will include the key components needed to make annotations useful:

  1. Abstract Data Model: An abstract data model for annotations
  2. Vocabulary: A precise vocabulary describing/defining the data model
  3. Serializations: One or more serialization formats of the abstract data model, such as JSON/JSON-LD or HTML
  4. HTTP API: An API specification to create, edit, access, search, manage, and otherwise manipulate annotations through HTTP
  5. Client-side API: A script interface and events to ease the creation of annotation systems in a browser, a reading system, or a JavaScript plugin
  6. Robust Link Anchoring: One or more mechanisms to determine, in a predictable and interoperable manner, a selected range of text or portion of media within a document that may serve as the target of an annotation, with allowance for some degree of document change; these mechanisms must work in HTML5 and must provide an extension point for additional media types and formats (a sketch of a selector-based target follows this list)
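To illustrate the anchoring deliverable, the Open Annotation drafts already describe selector-based targets. The following sketch (again with illustrative URLs, and property names that may change in the final standard) uses a text quote selector, which can re-locate the target text even after modest edits to the document:

{
  "@context": "http://www.w3.org/ns/oa.jsonld",
  "@type": "oa:Annotation",
  "hasBody": "http://example.org/notes/17",
  "hasTarget": {
    "@type": "oa:SpecificResource",
    "hasSource": "http://example.org/report.html",
    "hasSelector": {
      "@type": "oa:TextQuoteSelector",
      "exact": "temperature spike at noon",
      "prefix": "we observed a ",
      "suffix": " on the south sensor"
    }
  }
}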

Take a look at the Web Annotation Architecture diagram.

Creating a layer in the stack to enable collecting and combining sensor information in a meaningful way will require the means to associate additional information with sensor data (data model and vocabulary), a means to share the model in a concrete representation (serialization), and application interfaces (APIs).

Sensors are only one area where open annotations will have value. There are many use cases, including annotations of e-books, web pages, audio, video, maps, portions of data sets, and many more. Annotations are fundamental to human interaction and, as I suggest in this post, also to automated systems such as those using sensors.

To learn more about the basis for the W3C Annotation WG effort see the W3C Workshop on Annotations report and the materials from the I Annotate conference.


[1] “Combining Wireless Sensor Networks and Semantic Middleware for an Internet of Things-Based Sportsman/Woman Monitoring Application”, Jesús Rodríguez-Molina, José-Fernán Martínez, Pedro Castillejo, and Lourdes López, Sensors, 2013. See https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3649371/

[2] See “What is JSON-LD? A Talk with Gregg Kellogg” by Aaron Bradley, September 10, 2014, and the schema.org examples for location pages.

The Problem with Defaults

Recently I was using a map application on my phone, an application that gives turn-by-turn driving directions and works offline without a network connection. It works very well, but it provided a lesson in defaults.

Typically I use the application as it comes “out of the box”, preferring highway travel as it is typically faster and simpler. Being in California’s Silicon Valley, I decided after one agonizing drive on 87 and 101 during rush hour that maybe back roads would be better, so I changed a preference to disable highway travel. To my delight I discovered that back roads were much preferable, especially on short trips, from San Jose to Mountain View, for example. Why bother sitting on 101 if you do not need to?

Everything was fine for a few days, until I decided to leave California and drive back to SFO (the San Francisco airport, for you non-frequent travelers who haven’t memorized a wide variety of airport codes). Guess what: I started driving and soon realized I was getting a local sightseeing tour of San Jose, and had a pretty good idea I was not aimed at the highway entrance leading to 101. I definitely wanted to use 101 for that drive (or so I thought; has traffic really gotten that bad in the Valley, or did I just hit a bad day?). I pulled off the road, changed the preference, and then turned off the confused device, since I had neither the time nor the patience to wait while it sorted itself out. I made my own decisions on how to get to 87/101 and the problem was solved.

There are two lessons here. First, it is easy to forget about preferences (that is the idea, and why they are “defaults”, after all). Second, recovery might require some “out of band” effort, like giving up on the tool and making a manual (human, dare I suggest) correction.

My navigation experience was not a problem because I was somewhat familiar with the area, was not really relying on the device after a few days, and could just “punt”. If I had really needed to, I could have driven around for a while until the device (hopefully) oriented itself.

I’m not sure what would happen in a case where a preference is related to privacy, but I suspect I would not be able to recover, as the personal data would already have been deposited in a giant “big data” store somewhere, ready to be sold, shared, and used without my control or knowledge. Thus, if I choose to set a default to remember my decision to grant access (to location, address book, camera, microphone, etc.), forgetting this decision might be more serious. Although I do not use many such apps now, someday I might (1). If I forget the default, seeing an indicator in the chrome probably won’t help, as ads are training me to ignore every pane except the text in the pane I care about (2).

So let us say I mistakenly forget my privacy settings and realize it later. Is there a manual, human way to recover? Ideally I would go to the record on my device of which databases the apps shared information with, follow the links, and request that the data be removed, which it would be. That would be nice, but I suspect it is not so likely.

Thus perhaps a more significant change might be needed if user privacy matters. The best story I’ve heard is that the new currency is your information, and thus it should be marked appropriately, shared conservatively, and we should all participate in the monetization. Obviously this will require some work, but seems very interesting. Privacy will be a byproduct of the monetization, not the end in itself.

Asides:

(1) Perhaps someone can explain to me why so many Lumia apps seem to require access to my location in order to be installed. For example, why does a battery level app need to know my location? I can only assume it is not for me, the end user, but for ad delivery.

(2) On Safari in Reader mode the browser removes the non-interesting material, a feature that is very useful, and on Firefox I can suppress ads with an extension, but many times I find myself in a raw, ad-splattered browser window when I forget to take special action.

Chasing threats – a security (and privacy) symptom

Reactively responding to security threats is like a never-ending session of “whack-a-mole”: it will keep everyone busy but will probably never end, and it does not scale with the complexity of the web, its applications, and their contexts. Responding to threats is important, but we need a longer-term solution to the underlying problem.

A shaky foundation (Source)

With the advent of the Web and the reshaping of entire industries to base themselves on the Open Web Platform, we are all becoming more dependent on the trust associated with this platform. Fixing the numerous security and privacy flaws that stem from ambiguities or errors in the deployment, implementation, and design of many technologies and their interactions is a huge task, one that could outlast the businesses and individuals relying on the technology. What makes this especially hard is that the complexity of the individual technologies, and of their composition, offers many creative avenues for attack (not to mention the impact of Moore’s law in reducing the efficacy of older cryptographic algorithms). To give one example, HTML5 is generally taken to mean the composition of many specifications, such as HTML5 proper, CSS, JavaScript, and a variety of web APIs.

Ultimately what is needed is accountability, as noted by Professor Hal Abelson of MIT (slides, PDF). Also needed are systematic approaches to the underlying issues. For the most part this currently consists of best practices for code development (e.g. validate inputs), operating system design (e.g. sandbox applications), and deployment (e.g. enforce password strength rules). One issue is that everyone is busy meeting time-to-market constraints and focused on “getting the job done”, which typically means the visible functionality, not security. It takes a lot of discipline to build in security, and even then the degradation of algorithms over time and attacks that exploit complexity remain, creating a long-term cost issue. Security and Privacy by Design are worthy approaches for incorporating concern for these issues into the entire process, but they are easier said than done.

A mole looking out of its hole… (Source)

Creating standards to enable interoperability is a lot of work, even when the standards are based on previous development experience. Just as code is modularized, so are standards, enabling writing, reviewing, and interop testing in a reasonable time frame. This also allows the work to scale as different people work on different standards. Yet it creates issues too, since not all assumptions are documented or shared, and new ideas and approaches can appear later in the process (Promises, for example). Some work is also abandoned for a variety of reasons, and this can be good as the community learns. The net result is that there can be inconsistencies among specifications in basic approaches (e.g. in API interface design). All of these groups are tasked with creating specific deliverables that specify functionality to be composed with the implementations of other specifications to create applications. This puts the application developer in charge of security and privacy, for only they understand the application, its context, and its end-to-end requirements. The designer of a component cannot speak to the privacy data re-use or retention possibilities, or the key distribution approaches, for example.

This does not mean that security or privacy cannot be improved by the standardization community. They can. Notable examples include HTTP Strict Transport Security (HSTS), which ensures that all requests for all web page resources use TLS regardless of the links in the page, and Cross-Origin Resource Sharing (CORS), which defines a uniform approach for web browsers to enforce cross-origin web access, enabling a web application to use resources from a site other than its own source. What else can be done?
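Part of why both succeeded is that each is delivered as a simple HTTP response header. For example (the host name and policy values here are illustrative):

Strict-Transport-Security: max-age=31536000; includeSubDomains
Access-Control-Allow-Origin: https://app.example.com

The first instructs the browser to use only TLS when talking to this host for the next year; the second tells the browser that scripts from the named origin may read this response across origins.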

Taking an overall architectural view is helpful (see “Framework for Web Science”). The 2001 semantic web layering diagram is illuminating in that the capstone is “Trust” and “Digital Signature” is the glue binding the parts together, showing the fundamental importance of trust based on security mechanisms (the 2006 version, also in that text, shows Crypto instead of Digital Signature, among other refinements, but still relies on security mechanisms and proof to support trust):

Semantic web layering including trust at the top

XML Digital Signature 1.1 reached W3C Recommendation this year, demonstrating that creating the security basis is not easy (the JSON approach simplified the requirements and thus the effort, but I expect fundamental issues will remain).

I offer another security-centric architectural diagram to suggest the magnitude of the task of “simply providing a security foundation”:

Security functionality can be layered as well

Working through the diagram we see the following items:

  1. Entropy. The basis of most digital security (as opposed to building a physical moat around your castle) is the amount of true randomness or entropy upon which the techniques depend. If the randomness is not there, then the digital techniques fall apart. That makes this the basis, though often ignored.
  2. Key Management. A fundamental security principle is that only the key need be secret, not the algorithms etc. Thus given good entropy, the next building block is suitable keys, keeping private keys secret and so on. A lousy key won’t be of much use.
  3. Next is some means of associating keys with their purpose, discovering and using appropriate keys, and knowing they are valid. I put this as certificate management (including revocation) and all that goes on behind CA certificate issuance. I use PKI terminology, but this may not be the only way to accomplish it (in fact the question arises whether X.509 should be replaced, given its ambiguities and complexity)
  4. To be useful, crypto algorithms depend on keys and meaningful associations (even though certificates may themselves be created using crypto functions)
  5. Confidentiality and integrity are fundamental security features; I add identity as an essential building block in this layer (though certificates may support this functionality, there may be more to it in terms of policy, access control, etc.)
  6. Next we get to the Open Web Platform, including a variety of APIs that may use the underlying functionality (yes, Web Crypto may also offer some of this stack in the JavaScript layer as well; layers are not clean, are they?)
  7. Finally we get to the web applications that pull it all together (or do they?)
  8. The reason for doubt is noted on the side of the diagram: implementation quality matters a great deal for every item, as does the fact that everything must evolve over time (e.g. key and certificate roll-over, algorithm agility, etc.)

I put trust on the other side to indicate that items must operate in an integrated manner to produce a usable result. (I also left out reputation management as another trust mechanism).

As experienced with Internet protocol layering, some functionality is replicated in different layers, and we can debate what the exact layering should be. However, it is clear that there are a large number of logical components, all of which must work correctly, which in turn depends on correct design, correct implementation, and correct deployment and use. That offers a large number of opportunities for failure.
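To make the lower layers of the diagram tangible, here is a minimal browser sketch against the Web Crypto API (the API was still being drafted at the time of writing, so treat the details as illustrative). It exercises entropy, key generation, and an authenticated-encryption algorithm; note how much of the stack (certificates, key distribution, identity, policy) it does not touch:

// Entropy: the browser supplies cryptographically strong random values.
var iv = new Uint8Array(12);
crypto.getRandomValues(iv);

// Key management: generate a key and keep it non-extractable,
// so the secret never leaves the crypto layer.
crypto.subtle.generateKey(
  { name: "AES-GCM", length: 256 },
  false, // not extractable
  ["encrypt", "decrypt"]
).then(function (key) {
  // Algorithms: confidentiality and integrity via authenticated encryption.
  var data = new TextEncoder().encode("sensor reading: 21.7 Cel");
  return crypto.subtle.encrypt({ name: "AES-GCM", iv: iv }, key, data);
}).then(function (ciphertext) {
  // Everything above this layer (who may decrypt, how keys are shared,
  // how identity is established) is left to the application.
  console.log("encrypted bytes:", new Uint8Array(ciphertext).length);
});

Even this tiny example depends on correct behavior at every layer below it, which is precisely the point of the diagram.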

What is needed are generic, high-level simplifications that make trust more achievable. Strict Transport Security does that, taking a successfully deployed protocol and reducing the attack surface. CORS works toward that end as well, slightly increasing an attack surface to enable needed functionality, but in a controlled and understood manner.

It seems we need more work to reduce the attack surface in a consistent manner, by reducing optionality and choices. One area is certificates: are there too many choices and details in creating and managing them? Can we reduce the choices and ambiguities?

How about JavaScript APIs and WebIDL? Can more be done to unify and simplify? Is a best-practices guide needed?

It seems a good time to review how much can be simplified, how many options can be removed, and how much consistency can be encouraged. Maybe the W3C TAG could work on this, for example. It seems fundamental to next steps for the Architecture of the World Wide Web.


What Apple said yesterday, in a Nutshell

Yesterday, 10 Sept 2013, Apple announced new iPhones.

Here in a nutshell is what they said:

1. You want bright colors to make a fashion statement, you want a great camera? Then you don’t need a Lumia, because we have the new colorful iPhone 5c as well as the new iPhone 5s, with new camera hardware and software, including a bigger sensor with bigger pixels and a faster f/2.2 lens.

Nokia has responded with 5 things Apple did not include that Lumia does. We will probably see more responses.

2. You care about health? Well, we have new hardware and an API in the iPhone 5s, which means soon there will be a zillion great health apps. This is a new job to be done that the iPhone can do, conveniently and easily. No need for a watch or accessory right now; the phone will do.

3. You want usability? No need to type a password, and want to prevent Junior from racking up thousands of dollars in iTunes Store charges? Use the new iPhone 5s fingerprint reader, which works both to unlock the phone and to confirm store purchases.

Oh yes, all the info is kept private to the chip, so don’t be afraid to use this.

4. We upped the performance in the iPhone 5s yet again with the 64-bit A7, enabling all the new iOS 7 graphics and games and making everything perform.

5. Both will be available in many countries and on many operators in the next two weeks, including China – unlike others, this is something you can find and buy.

6. We didn’t go cheap on either. Nor is either inexpensive unlocked. And no, we didn’t increase the screen size.

That is it in a nutshell.