One topic that is getting a lot of press lately is privacy on the Internet, especially web tracking [Notes].
The W3C held a “Workshop on Web Tracking and User Privacy” on 28/29 April 2011, for which an agenda with links to presentations, workshop papers and a final report are available.
This is a difficult topic, since a balance is needed between what appears to be a legitimate need to enable advertising-based business models that support “free” content and the ability of users to protect their privacy without losing control over their own personal data.
Discussion at the workshop reflected the privacy needs of individuals on the web as well as support for business models driven by advertising. Technical proposals, such as an HTTP “Do Not Track” header and the use of tracking protection lists, were considered.
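To make the header proposal concrete, here is a minimal sketch (in Python, with hypothetical function names) of how a server might check for the proposed “Do Not Track” header, which is sent as `DNT: 1` when the user has opted out:

```python
def tracking_allowed(headers):
    """Return False when the client has sent the proposed
    "Do Not Track" opt-out header (DNT: 1)."""
    # Header names are case-insensitive in HTTP, so normalize first.
    normalized = {k.lower(): v.strip() for k, v in headers.items()}
    return normalized.get("dnt") != "1"

# A browser with the opt-out enabled:
print(tracking_allowed({"DNT": "1"}))         # -> False (do not track)
# A browser expressing no preference:
print(tracking_allowed({"User-Agent": "x"}))  # -> True
```

Note that, as the workshop discussion made clear, honoring the header is a policy question; the header itself only expresses the user's preference.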
Ed Felten of the FTC noted five desired properties of a “Do Not Track” mechanism in his slides:
- Is it universal? Will it cover all trackers?
- Is it usable? Is it easy to find, understand, and use?
- Is it permanent? Can the opt-out get lost?
- Is it effective and enforceable? Does it cover all tracking technologies?
- Does it cover collection in general, and not just some uses such as ads?
A significant issue noted at the workshop is that “user expectations may not match what is implemented”. One example is that the discussion is about opting out of “tracking”, not of “ads”, so even with an opt-out, ads might still appear. More complicated for users is that nuances may be possible, such as allowing first-party but not third-party tracking – yet what does this mean at the edge cases? Is a subsidiary a third party? What about outsourced work? This could confuse users and lead to results that are not what they expect or want. As mentioned at the workshop, the details will matter here.
Craig Wills of the Computer Science Department at Worcester Polytechnic Institute noted that first parties have a responsibility not to “leak” private information to third parties through careless implementations. This is detailed in his paper.
Helen Nissenbaum made an important point during the discussion: consent is not always needed, but only when user expectations are not met (or, I assume, when there is a risk of not meeting them). Consent is not needed at every step of the way. This relates to the theme of avoiding unnecessary user interaction, eliminating meaningless dialogs, and increasing usability.
Questions to ask before tracking include:
- Is it necessary to collect the data?
- Can the goal be accomplished another way, with less data?
Regulations and laws should not be overly prescriptive with respect to technology details; otherwise, as the technology changes, they lose effect. Instead, they should focus on the policy and goals. This is similar to mandating fuel efficiency in cars rather than the way it is achieved.
Apparently, enabling some tracking but not all, across a variety of parties, is difficult.
Workshop participants recognized the complexity and difficulties of the topic but also the need for steps to be taken in the near term. During the workshop goals were mentioned that included providing transaction transparency, relevant information, and meaningful user choices. It is clear that some changes may be required.
John Morris of CDT enumerated in his slides the typical objections raised with respect to implementing mechanisms to increase user privacy and indicated how they might be addressed, for example relying on non-technical mechanisms such as reputation, law or regulation rather than technology for enforcement.
Given the various stakeholders and concerns, the principle of doing what is “reasonable” seems to apply here, just as in other aspects of law.
Thus it is not surprising that there was general acceptance by workshop participants of adopting a middle-ground approach – specifically there was no objection to the proposal from CDT that includes the following definition:
“Tracking is the collection and correlation of data about the web-based activities of a particular user, computer, or device across non-commonly branded websites, for any purpose other than specifically excepted third-party ad reporting practices, narrowly scoped fraud prevention, or compliance with law enforcement requests.”
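The behavior this definition describes – collection plus correlation of a single identifier across unrelated sites – can be sketched as a toy model (all names hypothetical; real trackers are of course far more elaborate):

```python
from collections import defaultdict

class ThirdPartyTracker:
    """Toy model of the behavior the CDT definition covers: one
    tracker-issued cookie correlated across non-commonly-branded
    sites that all embed the same third party."""
    def __init__(self):
        self.visits = defaultdict(set)   # cookie id -> sites seen

    def record(self, cookie_id, site):
        # "Collection": each embedded request reports the site visited.
        self.visits[cookie_id].add(site)

    def profile(self, cookie_id):
        # "Correlation": the cross-site browsing profile.
        return sorted(self.visits[cookie_id])

tracker = ThirdPartyTracker()
for site in ["news.example", "shop.example", "health.example"]:
    tracker.record("cookie-123", site)   # same ad network on each site
print(tracker.profile("cookie-123"))
```

The definition's carve-outs (ad reporting, fraud prevention, law enforcement) are about purpose, not mechanism – the mechanism above is the same either way, which is part of why the definitional details matter.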
As noted in the W3C workshop report, possible next steps include the W3C chartering a general Interest Group to consider ongoing Web privacy issues and a W3C Working Group to standardize technologies and explore policy definitions of tracking.
Notes:
- “Retargeting Ads Follow Surfers to Other Sites”, New York Times, August 29, 2010
- “How to Fix (or Kill) Web Data About You”, New York Times, April 13, 2011
I submitted a position paper and gave a presentation noting that requirements that are simple to express can have large consequences in terms of complexity and implementation. As an example I mentioned the efforts in the Liberty Alliance to avoid correlation of identity across service providers through the use of opaque name identifiers. Another example is managing policy definitions when multiple parties are involved in setting policy. I also highlighted the applicability of the FTC “Do Not Track” requirements mentioned at the previous W3C workshop on Web Tracking and User Privacy.
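To illustrate the opaque-identifier idea: an identity provider can derive a different pseudonym for each service provider, so that two providers cannot correlate the same user by comparing identifiers. A minimal sketch using an HMAC (the Liberty Alliance specified its own formats, so treat every name and detail here as illustrative):

```python
import base64
import hashlib
import hmac

def opaque_name_id(secret, internal_user_id, service_provider):
    """Derive a per-service-provider pseudonym. Deterministic for a
    given (user, provider) pair, but unlinkable across providers
    without the identity provider's secret."""
    message = f"{internal_user_id}|{service_provider}".encode()
    mac = hmac.new(secret, message, hashlib.sha256).digest()
    return base64.urlsafe_b64encode(mac).rstrip(b"=").decode()

secret = b"identity-provider-private-key"   # held only by the IdP
a = opaque_name_id(secret, "alice", "sp-one.example")
b = opaque_name_id(secret, "alice", "sp-two.example")
print(a != b)  # the two providers see unrelated identifiers -> True
```

This is exactly the kind of simply stated requirement (“prevent correlation”) whose implementation ripples through every protocol message that carries an identifier.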
The workshop was well attended, with significant interest from a wide variety of stakeholders.
Possible next steps focused on incremental improvements to current technology, with the intent of achieving results in a short time frame, including
(a) creating a standard for tagging web form fields so that password fillers can work reliably (e.g., knowing which field is the user name, which the password, etc.), and
(c) further discussion of the broader issues on a mailing list.
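The form-field tagging in (a) later took shape as tokens on HTML's autocomplete attribute. A sketch of how a password filler might consume such tags, using Python's standard HTML parser (the page markup and class name are hypothetical):

```python
from html.parser import HTMLParser

class FormFieldScanner(HTMLParser):
    """Collect login fields, assuming they are tagged with the
    autocomplete tokens "username" and "current-password"."""
    def __init__(self):
        super().__init__()
        self.fields = {}

    def handle_starttag(self, tag, attrs):
        if tag == "input":
            a = dict(attrs)
            token = a.get("autocomplete")
            if token in ("username", "current-password"):
                self.fields[token] = a.get("name")

page = """
<form action="/login" method="post">
  <input name="u" type="text" autocomplete="username">
  <input name="p" type="password" autocomplete="current-password">
</form>
"""
scanner = FormFieldScanner()
scanner.feed(page)
print(scanner.fields)  # maps each token to the field it labels
```

Without such tags, fillers must guess from field names like "u" and "p", which is exactly the unreliability the proposal aimed to fix.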
There was a useful review of requirements, with rough agreement on most of them. Discussion of the failure of some earlier attempts to address these issues included mention that this is a “wicked problem”, that usability is essential, that any system must be decentralized and user-centric, that buy-in from all stakeholders, including web service providers, is essential, and that there must be incentives for all.
Note was made of the relevance of the NSTIC (US National Strategy for Trusted Identities in Cyberspace) initiative.
There were many interesting papers; a small sampling follows:
Federated Browser-Based Identity using Email Addresses, by Mike Hanson, Dan Mills, and Ben Adida
The Emerging JSON-Based Identity Protocol Suite, by Michael B. Jones (also see the slides)
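The core building block of that JSON-based suite is a compact, signed token: base64url-encoded JSON header and payload plus a signature, joined by dots. A minimal JWS-style sketch (illustrative only; the actual specifications define many more details and algorithms):

```python
import base64
import hashlib
import hmac
import json

def b64url(data):
    """Base64url without padding, as used in the compact serialization."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_token(payload, key):
    """Sketch of JWS-style compact serialization:
    header.payload.signature, signed with HMAC-SHA256."""
    header = {"alg": "HS256", "typ": "JWT"}
    signing_input = (f"{b64url(json.dumps(header).encode())}."
                     f"{b64url(json.dumps(payload).encode())}")
    sig = hmac.new(key, signing_input.encode(), hashlib.sha256).digest()
    return f"{signing_input}.{b64url(sig)}"

token = sign_token({"iss": "idp.example", "sub": "alice"}, b"shared-secret")
print(token.count(".") == 2)  # three dot-separated segments -> True
```

The appeal noted in the paper is that such tokens are easy to produce and verify in any language with JSON and HMAC support, unlike the XML signatures of earlier identity stacks.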
rea·son·able – (see http://www.merriam-webster.com/)
a : being in accordance with reason <a reasonable theory>
b : not extreme or excessive <reasonable requests>
c : moderate, fair <a reasonable chance> <a reasonable price>
d : inexpensive
At the W3C workshop on Web Tracking and User Privacy there were a number of themes.
One theme is that there are different business interests related to tracking user activity on the web, and different definitions of tracking. For example, first-party tracking might involve a web site recording information to maintain the contents of a shopping cart, something a user would typically expect. Third-party tracking might be used to serve advertisements to a user based on their activity. This may or may not be acceptable to a user, but it relates to efforts to fund a site that may provide value without charging a fee.
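The shopping-cart case is the first-party tracking users generally expect: the site issues its own session cookie and keys the cart to it. A minimal sketch (class and names hypothetical):

```python
import secrets

class FirstPartySessions:
    """Sketch of expected first-party tracking: a site-issued
    session cookie keyed to a shopping cart."""
    def __init__(self):
        self.carts = {}

    def new_session(self):
        # The value a site would place in a Set-Cookie header.
        sid = secrets.token_hex(8)
        self.carts[sid] = []
        return sid

    def add_item(self, sid, item):
        self.carts[sid].append(item)

shop = FirstPartySessions()
sid = shop.new_session()
shop.add_item(sid, "book")
shop.add_item(sid, "pen")
print(shop.carts[sid])  # the cart persists across page views
```

The mechanism is the same cookie machinery a third party would use; what differs, and what the definitional debates turn on, is who sets the cookie and across which sites it is read.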
Some tracking offers end users value, whether it be in supporting “free” services or in providing targeted ads that are useful and of interest.
Of greater concern is the lack of transparency and accountability – tracking without the user's knowledge or permission, and the potential for misuse of the information through practices such as inappropriately long retention.
Another theme is that usability is important, and this includes not burdening users with needless and numerous prompts for permission. In fact, given experience with security prompts such as those related to SSL/TLS certificates, there is little reason to expect that users will read and understand such prompts rather than simply dismissing them.
Roger Brownsword brought up (PDF) the interesting topic of the relationship of the moral codes of a society to regulation and technology, at the Technology & Regulation Symposium at the Berkeley Center for Law and Technology. Essentially, law, regulation, and technology can supplement the moral codes of a society, so less or more of them is required depending on the strength of common belief in, and adherence to, those codes. A shift can occur away from doing something because it is “right”, toward self-interest (the prudential approach), and finally toward doing only what is possible or practical. The second stage can rely on signals that you will be detected and convicted, e.g. with many CCTV cameras. The third is evidenced by technologies used to enforce options, such as turnstiles.
I note that Will Durant says something similar in volume 1 (“Our Oriental Heritage”) of the epic “Story of Civilization”.