Last November I attended an event connected to a crucial aspect of Object-Oriented Subject’s research: the exploration of the relationships of power and control embedded in algorithms. The event was a lecture given by Cathy O’Neil in Utrecht on the 21st of November. As it offered some interesting insights, especially into a data scientist’s perspective on algorithm design and application, I would like to write a synopsis of the most relevant points Cathy O’Neil raised.
Heavily based on her 2016 book of the same name, Cathy O’Neil’s “Weapons of Math Destruction” lecture guided its attendees on a journey through algorithmic decision-making: starting with the basic building block of what an algorithm is, then clarifying what defines a Weapon of Math Destruction – illustrated by concrete, real-life instances of WMD algorithmic governance and its more-often-than-not disastrous consequences – and finishing off with proposals to counteract WMD power.
While algorithms are often sold as neutral, unbiased tools, Cathy’s breakdown of what an algorithm is intends to dispel such claims. According to Cathy, an algorithm is a combination of historical data and a certain, subjective definition of success. To press her case further, Cathy compared designing an algorithm to preparing dinner for her kids. She starts with a set of curated ingredients (data) that were already chosen on the basis of HER definition of success – feeding her kids vegetables – which, more likely than not, is not the same as her kids’ definition of success. Thus, another important aspect of what constitutes an algorithm comes to light: whoever has the power to curate the data and determine what success looks like is the one imposing the agenda. Algorithms are opinions embedded in code, not pure mathematical abstractions with no position or politics. Whilst this may seem to many a self-evident truth, it is nonetheless an important insight that Object-Oriented Subject, having it as one of its backbones, also hopes to clearly communicate.
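Cathy’s dinner analogy can be made concrete with a toy sketch (my own illustration, not from the lecture; the meals and scores are invented): the data is identical, and only the definition of success changes, yet the “optimal” decision flips entirely.

```python
# Same data, different definitions of success, different "optimal" answers.
# All meals and scores below are made up for the sake of the illustration.

meals = {
    "broccoli bake": {"vegetables": 9, "kid_appeal": 2},
    "pasta with hidden veg": {"vegetables": 5, "kid_appeal": 7},
    "ice cream for dinner": {"vegetables": 0, "kid_appeal": 10},
}

def best_meal(success_metric):
    """Pick the meal that maximizes a given definition of success."""
    return max(meals, key=lambda name: success_metric(meals[name]))

# Mum curates success as "vegetables eaten"; the kids would define it as taste.
parents_choice = best_meal(lambda m: m["vegetables"])
kids_choice = best_meal(lambda m: m["kid_appeal"])

print(parents_choice)  # broccoli bake
print(kids_choice)     # ice cream for dinner
```

Nothing in the data itself dictates either outcome; the `success_metric` – chosen by whoever holds the power to choose it – does.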
What the author coins a Weapon of Math Destruction appears within this context: according to Cathy O’Neil, a Weapon of Math Destruction is an ill-conceived mathematical model (one lacking a sufficient number of samples and/or feedback) that is widespread, mysterious and destructive. Being widespread, it is resistant to nuance; being mysterious, it is resistant to accountability, discussion and rebuttal; and being destructive, it is often used to identify, exclude and punish, generating a negative feedback loop at a societal level.
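The destructive feedback loop can be simulated in a few lines. In this made-up sketch (the districts, numbers and allocation rule are all invented), a model sends scrutiny wherever past records are highest, and scrutiny in turn produces more records – so an initial disparity in the historical data amplifies itself even though the underlying incident rate is identical everywhere.

```python
# A toy feedback-loop simulation: scrutiny follows the recorded data,
# and recorded data grows with scrutiny. Both districts have the SAME
# true incident rate; only the historical record differs at the start.

recorded = {"district A": 60, "district B": 40}  # biased starting record
true_rate = 0.5  # identical underlying rate in both districts

for _ in range(20):
    # The model sends most patrols to whichever district "looks worse".
    top = max(recorded, key=recorded.get)
    for district in recorded:
        patrols = 70 if district == top else 30
        # More patrols means more incidents get recorded there.
        recorded[district] += true_rate * patrols

share_a = recorded["district A"] / sum(recorded.values())
print(round(share_a, 2))  # district A's share has grown past its initial 0.60
```

The model’s output feeds its own future input, which is precisely why a WMD’s verdicts look ever more “confirmed” by the data the longer it runs.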
Cathy then went on to give some examples of actually existing Weapons of Math Destruction. One of them concerned the educational achievement gap in America, which is largely a class problem. As might be expected, however, this was ignored by the American government in favour of placing the blame on bad teaching. With this, another problem arose: how to define what a bad teacher is? The solution found by the government was to assign every child a personalized expected score and hold teachers accountable for students who did not achieve theirs. With teachers getting fired, many wanted to know what model lay behind the expected-score system – a desire which was promptly dismissed by invoking the fair, neutral and altogether inaccessible nature of Mathematics. These claims represent what Cathy terms the weaponization of math.
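The lecture did not spell out the model behind the expected scores – that opacity being exactly the point. Teacher “value-added” models of this general kind typically regress this year’s test score on last year’s, then attribute each student’s residual (actual minus expected) to the teacher. The sketch below is a hypothetical reconstruction along those lines; every name and number in it is invented for illustration.

```python
# Hypothetical value-added sketch: fit expected = a * prior + b on
# district-wide (prior_score, actual_score) pairs, then blame or credit
# a teacher for their students' average deviation from expectation.

district = [(70, 74), (80, 83), (60, 61), (90, 95), (75, 78), (65, 66)]

# Ordinary least squares in closed form.
n = len(district)
mean_x = sum(p for p, _ in district) / n
mean_y = sum(s for _, s in district) / n
a = sum((p - mean_x) * (s - mean_y) for p, s in district) / \
    sum((p - mean_x) ** 2 for p, _ in district)
b = mean_y - a * mean_x

# One (invented) teacher's class, scored against the district-wide fit.
teacher_class = [(72, 70), (68, 65), (85, 84)]
residuals = [s - (a * p + b) for p, s in teacher_class]
value_added = sum(residuals) / len(residuals)

print(round(value_added, 2))  # negative: this teacher gets flagged as "bad"
```

With only a handful of students per class, such residuals are mostly noise, which is part of why O’Neil classes these scoring systems as ill-conceived: too few samples, and no feedback on whether the verdicts were right.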
Plenty of other examples were given during the talk, concerning job hiring processes, predictive policing and targeted advertising. The latter, Cathy states, separates people on the Internet by race, gender, class, ignorance, etc., and it should be read within a power frame, not a privacy one. It has real-life consequences for the most vulnerable people online, such as enticing them to join bullshit universities, try bullshit ways to make easy money, and so on. At this point, Cathy also mentioned the opaqueness of online political campaigning: given the option of targeting their ads, campaign managers will use such a tool to put the focus on different issues for different people, meaning that there is no way for people in different target groups to know the whole campaign. These tools for manipulating public opinion thus become all the more insidious for their lack of transparency and accessibility.
It is important to note that, at least at the time of writing of “Weapons of Math Destruction”, and while acknowledging its potential for abuse, Cathy did not yet consider Facebook a WMD.
Cathy finished her lecture off by providing a list of possible steps that could be taken to regulate algorithmic governance, such as having data scientists pledge a sort of Hippocratic Oath, independently auditing algorithms, creating a citizens’ movement, etc.
The audience then proceeded to raise their questions. There seemed to be a belief among some members of the audience that abuses such as the ones described by Cathy were far-off realities, unlike anything we find in Europe.
In my understanding, even if, in terms of scale, the North American situations described by Cathy are indeed seldom found in Europe, this is by no means such a distant reality – algorithmic decision-making is being carried out, with arguable success and less arguable consequences, around these parts too. Take the example of the Polish profiling of the unemployed. Moreover, aren’t most of us on some form of corporate social media? Those of us who are can still fall victim to the vicissitudes of targeted advertising and campaigning.
The GDPR (General Data Protection Regulation), which replaces the Data Protection Directive 95/46/EC and will apply from May 2018, was then brought up by another member of the audience as a possible safeguard against these types of situations. According to the GDPR official website:
“The GDPR not only applies to organisations located within the EU but it will also apply to organisations located outside of the EU if they offer goods or services to, or monitor the behaviour of, EU data subjects. It applies to all companies processing and holding the personal data of data subjects residing in the European Union, regardless of the company’s location.”
According to a piece written by Jędrzej Niklas, entitled “The Regulatory Future of Algorithms”, there are however many experts raising concerns and calling for further action on data regulation, some of which echoes Cathy’s closing proposals. Among these experts are Heiko Maas, the German Minister of Justice, who calls for a new law to prevent automated discrimination by forcing companies to establish more transparency where algorithmic methods are concerned; experts from the Oxford Internet Institute, who propose a separate right for people to receive an explanation of how the algorithms affecting their lives work and how certain decisions were made; Ben Shneiderman, a professor at the University of Maryland, who advocates the creation of a special regulatory agency to audit algorithms; and Danielle Citron and Frank Pasquale, who argue that algorithms used for public purposes should be independently audited for their potential for discrimination.
On the topic of actual enforcement of the GDPR, a new European privacy enforcement non-profit organization, noyb – European Center for Digital Rights, claims to be the “first European organization to make use of this new law”. noyb’s goal is to engage with projects that collectively enforce the privacy rights of European citizens. One of noyb’s founders is Max Schrems, the Austrian lawyer and activist behind Europe vs Facebook and the class action filed in Austria against Facebook Ireland. With these projects Schrems attempted, among other things, to answer the question “Are EU Data Protection Laws enforceable in practice?”. While that class action had to be built on existing consumer rights laws, Schrems claims that the GDPR will allow such cases to be brought under a “more precise legal basis”.
It remains to be seen what changes effectively take place once the GDPR comes into full force but, repeating and underlining what Cathy said, algorithms should be analyzed within a power frame; it therefore does not suffice to engage with them on a privacy level alone. While these amendments will hopefully work at the level of policy, and may indeed more immediately benefit otherwise abused data subjects, they are bandages, not permanent fixes. Permanent fixes, if there ever comes a time for them, will have to be engaged at a deeper economic and systemic level.