Privacy: The Human Rights Impact of Algorithmic Profiling

Aaina Agarwal
3 min read · Sep 4, 2020


Content delivery based on predictive profiling has interconnected implications for both privacy and freedom of expression, in part because human rights discourse treats privacy as essential to realizing freedom of expression. The idea of “privacy” is meant to give people a space in which to determine their own identities. When algorithms intrude on this space, using a profile to determine what we see, they limit our cognitive autonomy to construct how we think and feel. This directly implicates several rights guaranteed by the human rights system, including the rights to freedom of thought and freedom of opinion, with additional consequences for non-discrimination.

Autonomy is the link between privacy and freedom of expression. The guarantee of a private sphere is meant to safeguard autonomy, which is the foundation for freely developing and exchanging ideas. Personalized algorithmic persuasion encroaches on this private sphere, with cascading impacts on privacy and freedom of expression. For example, targeted media and political content minimizes exposure to diverse views, which interferes with our agency to seek and share opinions across societal divides. It is not only that we are predictively exposed to a narrow stream of content that leverages our predispositions to reinforce an existing identity; this process also cognitively isolates us from others, limiting our ability to keep testing our views in the autonomous, ongoing construction of who we are.

Profiling can also raise human rights issues with respect to discrimination. Using profiles to direct exposure constrains outcomes based on historical patterns, which can just as easily reinforce past discrimination. In the US, anti-discrimination law takes the form of either “disparate impact” or “disparate treatment.” Most practices that are called into question fall under the first category: they are facially neutral, applied to everyone equally in practice, yet lead to outcomes that sort people differentially by category (e.g., race, gender, socio-economic status).

The impact of targeted content delivery will limit opportunity unevenly, depending on education, socio-economic status, and the advantages that flow from them. For example, people who critically understand how these systems influence decision-making are better resourced to insulate themselves from the outcomes. If algorithmic persuasion works to compound disadvantage for marginalized groups, it implicates the right to non-discrimination that underpins the guarantee of all rights under human rights law, and could potentially support a disparate impact claim under US law.

The analysis above focuses on how predictive algorithms impact our own decision-making and the resulting implications for human rights. It sets aside the harms that arise when others use algorithmic profiling to make decisions affecting our livelihoods and opportunities, for example in policing, criminal justice, hiring, and housing. Even so, it highlights the need for a regulatory framework that considers algorithmic profiling beyond privacy and data protection: as a threat to individual autonomy, with consequent implications for freedom of expression and non-discrimination.
