Conference report – Computers, Privacy and Data Protection (CPDP), Brussels, 22nd–24th January 2020

Last week I attended the annual ‘Computers, Privacy and Data Protection’ conference in Brussels. It consisted of an intense programme of 84 panels over just 3 days, with five panels running concurrently in separate venues, and the high-calibre panellists included intellectual heavyweights and privacy crusaders. It was a ‘who’s who’ of the privacy world, featuring names such as Ravi Naik, Carole Cadwalladr, Pat Walshe and Max Schrems. Those in attendance comprised practitioners, theorists, academics and activists, not to mention industry reps from Microsoft, Mozilla and Google amongst others. Every aspect of data privacy was covered, from website registry to children’s rights, surveillance, and adtech.

Opposing views

In their opening remarks, the organisers said they had made a conscious decision to keep the panels diverse, and they were true to their word. The panellists were well mixed in terms of academic discipline, culture, industry, gender and nationality.

It was refreshing to hear a variety of opinions. For example, on a panel discussing AI, police representatives from Belgium and Denmark welcomed the benefits of AI for crime prevention and, more worryingly, crime prediction. They defended its use by saying it was applied not at a personal level but at a societal level. Prof Rosamunde Van Brakel of the Vrije Universiteit Brussel argued against it, her research having produced evidence of inbuilt bias and ethnic profiling, such as disproportionate arrest rates for certain groups of people.

Another panel, on ethical adtech and organised by Open Rights Group, was valiantly attended by Nicola Cain of RPC, a law firm that works with adtech companies. The majority in the room were in consensus on the motion that real-time bidding should be illegal because it deprives users of their right to privacy. The whole audience was on tenterhooks; however, Ms Cain was not heckled but politely told that she was wrong, and how, and in what ways she was wrong. It generated a certain frisson in the room.

Unfortunately, there were sotto voce grumbles, and complaints on social media, about companies with atrocious records on privacy sponsoring events. For example, a panel titled “Unlocking Societal Benefits of AI with Privacy Protective Technology” was sponsored by Google. The participants were: a moderator who worked for Google, a trade association representative funded by Google, a think tank representative funded by Google, a privacy expert employed by Google, and an academic who had been appointed to Google’s advisory council. The only speaker not connected with Google represented the European Data Protection Supervisor and was, in any event, restricted in what he could say.

Privacy by Design

Another recurring theme across the panels was privacy by design. It was agreed that regulation of AI is not enough to protect rights, and that privacy should be considered from the very inception of any online product or service.

For instance, it was argued that apps, rather than incorporating privacy by design, were consciously ‘broken by design and by default’ by the tech industry to facilitate its surveillance and manipulation tactics. Helen Dixon said that retro-fitting apps for privacy by design is painstaking and not sufficient; nor will conducting a privacy impact assessment retrospectively suffice. Privacy by design should be integral to every piece of software, and that entails teaching privacy ethics and legislation to engineering and IT students in every university, giving them a thorough grounding in privacy and the consequences of not embedding it in their products. Companies need to engineer products from inception for data minimisation. Julie E Cohen correctly pointed out that people made this technology, so people can fix it.

Transparency

Many contributors called on companies to increase their transparency. Pat Walshe of Privacy Matters observed that the rules around transparency have been in place for decades, so how are companies like Netflix allowed to carry out covert surveillance? These companies know every intimate detail about us, while we know nothing about them. Opacity is built into their modus operandi. Netflix, for example, can set testing cookies on your device, which allows its labs to collect your data for research purposes. They record our moods, our feelings and our behaviour. Panoptykon, the Polish NGO, recently published a report on adtech companies; in the course of its research it discovered that 90% of websites use dark patterns (tricks that manipulate users into taking steps advantageous to the company), thus stripping users of their agency.

Lokke Moerel called for transparency in privacy policies – she said a privacy policy should be idiot-proof and readable by a child. Breach notifications should contain no legal language; they should be clear, simple and disclose all information. Mozilla’s research showed that if companies hold back information about a breach and it is exposed later, the damage to their reputation is greater in the long run.

The consent model is no longer fit for purpose

An observation that cropped up again and again from various quarters was that the consent model is no longer viable. The general public do not read cookie policies or privacy policies, and even if they attempted to, the app companies make it a virtually impossible, Herculean task. It is difficult even to know what constitutes proper consent if people don’t understand what they are consenting to. It’s not effective, and it’s certainly not informed consent. People are given so much control that they “choke on it” – meaning they are expected to read the policies of every single website they visit. This was excellently illustrated by Cameron Russel of Western Union, who used the analogy of someone getting food poisoning from a snack they bought. Imagine if the victim complained and was told: “Too bad if you got poisoned. It’s nothing to do with us – all the ingredients were listed on the packet, so it’s your responsibility.”

This is particularly relevant to children’s rights. Dr Eva Lievens’ research showed that, in terms of children’s rights, the concept of consent doesn’t work because adtech companies draft privacy policies crammed with legalese and jargon, making them totally obscure and incomprehensible to children. She called for the “de-responsibilisation” of children and parents. It was argued that such a strategy would defeat the very purpose of the GDPR, which was supposed to empower people to take control of their own data. She counterargued that, rather than removing autonomy from children or parents, she was demanding additional protection for them, as those groups are less able to defend themselves against surveillance and manipulation.

On another panel, the Norwegian Consumer Council said the whole business model needs to change, because consumers have no idea what is happening to their data and shouldn’t be responsible for protecting it. Julie Brill of Microsoft claimed it was a huge mistake to make consumers responsible in the first instance; regulators should instead strengthen the duties of care, loyalty and confidentiality owed by companies to consumers.

Prof Neil Richards put the situation thus: “Consent and choice – bo*****s! Consent and choice gave us Trump and Brexit and the potential dissolution of western democracy.”

Conclusion 

It was abundantly clear that the conference was brimming with privacy devotees who cared deeply about human rights and social justice. But one astute participant asked why activists such as Max Schrems were policing the system when regulators should be at the forefront. Stricter regulation is desperately needed, and data protection authorities should be properly funded.

Wojciech Wiewiórowski, the European Data Protection Supervisor, suggested a new strategy for the future called RATS: Reliable solutions, Action to be taken against offenders, Tools to handle new technology, and Sustainable solutions for society and the environment. What we have now is not sustainable.

If any solutions are presented, they need to be implemented soon, as it looks like it will be another tumultuous year for privacy and data protection. Many legislative changes are on the way, each one with wide-ranging consequences. The UK has left the EU and subsequently implemented the “UK GDPR”, and the UK data protection regime may change again depending on whether it can strike a deal with the EU. Enforcement of the California Consumer Privacy Act commences on 1st July, so it will be interesting to see how that plays out. All eyes will be on Ireland too, as its Data Protection Commissioner interprets the outcome of Max Schrems’ 7-year battle with Facebook and concludes investigations into the largest social media companies. The ongoing competition between the Democrats’ Consumer Online Privacy Rights Act (COPRA) and the Republicans’ United States Consumer Data Privacy Act will no doubt rumble on. In India, the Personal Data Protection Bill may have huge implications, both negative and positive, for privacy and personal data.

With regard to data-driven elections, the US presidential election will be fascinating to watch. Twitter claims it will ban political ads, including paid tweets by candidates and on political issues. Google has banned political micro-targeting and placed restrictions on political advertising, whereas Facebook has vowed to remove deepfakes but not to stop political targeting or disinformation. We shall find out whether the motto of “move fast and break things” extends to breaking democracy.

Jennifer is an organiser for ORG Scotland and has a master’s in Gender, Conflict and Human Rights.