This blog post originally appeared on the Cambridge Computer Lab’s Light Blue Touchpaper blog. Since writing this, the biggest development in my thinking on the topic relates to the politics of standards adoption. In particular, our PETS paper tries to understand why the TCF was adopted while P3P and DNT failed.

==========

This coming Monday will mark two years since the General Data Protection Regulation (GDPR) came into effect. It prompted an initial wave of cookie banners that drowned users in assertions like “We value your privacy”. Website owners hoped that collecting user consent would ensure compliance and ward off hefty fines.

Article 6 of the GDPR describes how organisations can establish a legal basis for processing personal data. Setting aside a selection of ‘necessary’ grounds for doing so, data processing can only be justified by collecting the user’s consent to “the processing of his or her personal data for one or more specific purposes”. Consequently, obtaining user consent could be the difference between suffering a dizzying fine and not.

The law changed the face of the web and this post considers one aspect of the transition. Consent Management Providers (CMPs) emerged offering solutions for websites to embed. Many of these use a technical standard described in the Transparency and Consent Framework. The standard was developed by the Interactive Advertising Bureau (IAB Europe), who proudly claim it is “the only GDPR consent solution built by the industry for the industry”.

All of the following studies either directly measure websites implementing this standard or explore the theoretical implications of standardising consent. The first paper looks at how the design of consent dialogues shapes the consent signals sent by users. The second paper identifies disparities between the privacy preferences communicated via cookie banners and the consent signals stored by the website. The third paper uses coalitional game theory to explore which firms extract the value from consent coalitions, in which websites share consent signals.

Dark Patterns after the GDPR: Scraping Consent Pop-ups and Demonstrating their Influence by Midas Nouwens, Ilaria Liccardi, Michael Veale, David Karger, and Lalana Kagal.

Nouwens et al. use a mixed-methods design combining web scraping, user studies, and legal analysis to evaluate whether consent dialogues comply with the GDPR. The user study identified design choices that increase the propensity to consent, such as “removing the opt-out button from the first page increases consent by 22–23 percentage points”. The authors found that many participants suggested they could not achieve their “ideal privacy setting”, which is unsurprising given the dialogues were managed by CMPs, who market the rates at which they obtain consent as a selling point.

The authors then scrape 10k websites in the UK and filter down to a sample of 680 that employ one of the five CMPs considered by the study. Each site is then analysed to detect “dark patterns”, design choices that increase the propensity to consent. They find that around 90% of the sites implement at least one of the three dark patterns under consideration.
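To give a flavour of what one such check might look like (this is my own illustrative sketch, not the authors’ scraper), a script injected into a rendered page could count how many consent checkboxes are already ticked before the user touches the dialogue. The generic selector below is an assumption; real CMP markup varies widely.

```typescript
// Illustrative check for one dark pattern: pre-ticked consent options.
// The selector is a generic assumption; real CMP dialogues use varied markup.
const checkboxes = Array.from(
  document.querySelectorAll<HTMLInputElement>('input[type="checkbox"]')
);
const preTicked = checkboxes.filter((box) => box.checked);

console.log(
  `${preTicked.length} of ${checkboxes.length} checkboxes are pre-ticked before any interaction`
);
```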

Finally, the authors link these design choices back to the GDPR via legal analysis. For example, pre-ticked boxes were used in 56% of the sample even though they had previously been ruled not to constitute valid consent. This touches on one issue with declaring these practices “illegal”: we have yet to see court judgments on these practices under the GDPR, and they will come no sooner given that the UK ICO has told complainants their cases will not be investigated during lockdown.

Do Cookie Banners Respect my Choice? Measuring Legal Compliance of Banners from IAB Europe’s Transparency and Consent Framework by Célestin Matte, Nataliia Bielova, and Cristiana Santos

Matte et al. investigate cookie banners implementing IAB Europe’s Transparency and Consent Framework. Whereas AdTech vendors once operated in the shadows, the framework uses relatively open APIs so that advertisers can easily verify which consent signal was collected and stored by the website. The authors exploit these APIs both to (a) send a consent signal via the dialogue and (b) check which consent signal the website stored.
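As a rough illustration of that openness (a sketch in the spirit of the paper, not the authors’ tooling), any script running on a page that embeds a TCF v1.1 CMP can ask it for the stored consent signal through the standard __cmp call. The result fields below follow my reading of the v1.1 specification and may differ across CMPs and framework versions.

```typescript
// Sketch: asking a TCF v1.1 CMP which consent signal it has stored.
// Assumes window.__cmp exists as described in the v1.1 spec; fields may vary.
type CmpCallback = (
  result: { consentData: string; gdprApplies: boolean },
  success: boolean
) => void;

const cmp = (window as unknown as {
  __cmp?: (command: string, parameter: unknown, callback: CmpCallback) => void;
}).__cmp;

if (typeof cmp === "function") {
  cmp("getConsentData", null, (result, success) => {
    if (success) {
      // consentData is the base64url-encoded IAB consent string.
      console.log("Stored consent string:", result.consentData);
      console.log("GDPR applies:", result.gdprApplies);
    }
  });
} else {
  console.log("No TCF v1.1 CMP (__cmp) detected on this page");
}
```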

The study samples 30k websites across EU domains and generic domains (.eu, .org, and .com) and identifies 1,426 websites implementing the consent standard. They discover that 141 (10%) websites register positive consent before the user has even made a choice and 27 (2%) store a positive consent signal even when the user explicitly opts out. Corroborating the previous paper, they also identify many websites implementing so-called “dark patterns” like cookie banners with pre-selected options or no way to opt out.

These results are particularly concerning given the paper exclusively studies consent management providers, who supposedly specialise in compliance. Further, it appears there was little thought for security when designing the system linking CMPs and vendors. On page 14, the authors observe that a lack of authentication means “consent strings can be forged by anyone”. Finally, the authors observe websites using the consent signals collected by other websites, which helps to motivate the next paper.
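To see why that matters, here is a minimal sketch of the forgery concern, under the assumption that the signal sits in a first-party cookie (commonly named euconsent in TCF v1 deployments): nothing stops any script on the page from overwriting it with an arbitrary value, and downstream vendors have no cryptographic way to distinguish a forged string from a genuine one.

```typescript
// Sketch of the forgery concern: the consent string is not signed, so any
// script can overwrite the stored value. "euconsent" is the cookie name
// commonly used by TCF v1 CMPs; the value below is a placeholder, not a
// validly encoded consent string.
const forged = "BOplaceholderNotARealConsentString";
document.cookie = `euconsent=${forged}; path=/; max-age=${60 * 60 * 24 * 365}`;

console.log("Cookie jar now contains:", document.cookie);
```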

The Commodification of Consent by Daniel W. Woods and Rainer Böhme

The final paper, of which I am an author, explores the commodification of consent, in which “a legal concept designed to empower users has been transformed into an asset that can be traded across firms”. We used coalitional game theory to consider the implications. Our results show that consent coalitions create the most value for firms running large consent deficits (the proportion of users from whom the firm cannot obtain consent), and we predict such firms will pay a fee to join. In reality, we see that the IAB charges vendors (who have no way to directly obtain consent) €1,200 to join the Global Vendor List.
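As a deliberately simplified two-firm illustration (my own toy characteristic function, not the model in the paper): let a publisher p obtain consent from a fraction c_p of users, let a vendor v have no way to collect consent itself, and let each consented user be worth w to each firm that can use the signal.

```latex
% Toy two-firm consent coalition (an illustrative assumption, not the paper's model)
\[
  v(\{p\}) = w\,c_p, \qquad v(\{v\}) = 0, \qquad v(\{p,v\}) = 2\,w\,c_p
\]
% The Shapley value splits the coalition's surplus:
\[
  \phi_p = \tfrac{1}{2}\bigl(v(\{p\}) + v(\{p,v\}) - v(\{v\})\bigr) = \tfrac{3}{2}\,w\,c_p,
  \qquad
  \phi_v = \tfrac{1}{2}\bigl(v(\{v\}) + v(\{p,v\}) - v(\{p\})\bigr) = \tfrac{1}{2}\,w\,c_p
\]
```

In this toy setting the vendor, whose consent deficit is total, gains a surplus of w c_p / 2 purely by joining, which is exactly the kind of value a coalition organiser can tax as a membership fee.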

The second prediction is that obtaining consent will become a winner-takes-all market (e.g., one firm that can easily collect consent will collect all the coalition fees). Though this has not been observed in practice, we would not expect it to be widely publicised if it were happening. Further, we suggest that publishers should consider whether collecting consent for the Global Vendor List, which empirical measurements show the majority of publishers do, is worth the reputational and legal risk that comes with weak supervision of partners (something Facebook discovered the hard way in the Cambridge Analytica scandal).

The industry has recently developed the Transparency and Consent Framework 2.0 that removes power from vendors. This is one of the paradoxes of the advertising industry; publishers are both suspicious of and reliant on AdTech firms. The development of this standard and the coalition of firms using it will be fascinating to watch going forward.

CONCLUSION

So did the GDPR change how and to what extent privacy was protected? There is empirical evidence that the GDPR shifted internet firms towards obtaining consent, which is unsurprising given the potential fines. However, the devil is in the details of how consent was obtained. Nouwens et al. argue that consent dialogues are being designed to shift users towards providing consent. Matte et al. go further and provide evidence that some websites simply ignore the consent preferences expressed in a dialogue. Finally, our paper suggests firms will instead purchase consent via coalitions. In all cases, it is unclear whether users gained meaningful control over their personal data, and it may well be 10 years before we know the answer.