March 9, 2023 GM
Glitter Meetup is the weekly town hall of the Internet Freedom community at the IF Square on the TCU Mattermost, at 9am EST / 2pm UTC. Do you need an invite? Learn how to get one here.
- Date: Thursday, March 9
- Time: 9am EST / 2pm UTC
- Who: Amber Sinha & Arindrajit Basu
- Moderator: Mardiya
- Where: On TCU Mattermost "IF Square" Channel.
- Don't have an account on the TCU Mattermost? You can request one by following the directions here.
Can Groups Exercise Rights Over Data?
Data is no longer collected only about one individual, or even a small group of individuals, but increasingly about large, often undefined groups. That data is then transformed, through multiple layers of algorithmic processing, into patterns and group profiles that are applied on a macro scale. While the individual may not be the focal point of algorithmic processing and its derivative value, fundamental rights, including the right to privacy, remain vested in individuals as a fundamental tenet of any democracy.
We will engage with the relationship between groups and the rights they hold, the framework within which groups can exert rights over data, and the relationship between groups and their individual members as well as external actors. We will also shed light on algorithmically determined groups, where individuals may not even be aware of their membership, and explore how groups could still serve as a unit of harm mitigation.
- Amber Sinha works at the intersection of law, technology and society, and studies the impact of digital technologies on socio-political processes and structures. His research aims to further the discourse on regulatory practices around internet, technology, and society. He is currently a Senior Fellow-Trustworthy AI at Mozilla Foundation studying models for algorithmic transparency, and the Director of Research at Pollicy.
- Arindrajit Basu is a researcher covering geopolitics, constitutional law, and technology. He is affiliated with the Centre for Internet & Society as a Non-Resident Fellow. He is a lawyer by training.
- Hello, I am Amber (@ambersinha on Mattermost). I am a researcher working at the intersection of law, technology and society. I am currently a Senior Fellow at Mozilla and Director of Research at Pollicy. Arindrajit and I are hoping to share reflections from our research on group data rights.
- I'm Arindrajit (@arindrajit on Mattermost), a researcher affiliated as a Non-Resident Fellow with the Centre for Internet & Society (where Amber and I were colleagues!). My research focuses broadly on data governance (on which I've had the privilege of writing with Amber) and the global governance of technology. As of last week, I started as an external PhD candidate at Leiden University's Faculty of Governance and Global Affairs. Looking forward to the discussion.
- So, in the last three years, there has been excitement about the ‘data as an asset’ discourse globally and ‘community data’ in India, without clarity on how a community can exercise rights over data. Rejecting the use of new terminology, we use the term group rights instead.
- We begin by asking: what is a group? A group could be self-aware, coming together through a common identity (what the philosopher Peter French calls a conglomerate collectivity). A group could also be individuals brought together by factors other than a common identity (an aggregate collectivity).
- In our paper, we consider some tricky situations involving examples like SalusCoop and Ciitizen, and conflicts between individual and group rights, and recommend that a right to opt out should vest with the individual at all stages of the data cycle.
- We highlight that when collectivising value, hierarchical arrangements tend to set in, where a subset can end up taking decisions that harm an individual. Therefore, clear mechanisms to protect individual rights are needed. Our paper is an attempt to recognise this critique and ask the next question: how can this relational approach (which we see arising from individual identities being drawn from membership in groups) translate into the exercise of power by groups?
- The 'community' based approach has gained currency as a way to address inequities of power and for communities and collectives to exercise power over their data. However, it remains a tricky terrain and our paper is an attempt to ask some fundamental questions about how to navigate it.
- First, what makes up a 'self-aware group'? Our criteria are mostly drawn from constitutional law and doctrinal scholarship on groups; we added the fourth specifically with modern-day data processing in mind:
- Individual members must perceive themselves to be normatively bound to each other or to have collective interests that they share with the group
- The group is working in adherence with, and for the advancement of, this shared normative understanding which could include decision-making processes, membership rules, adjudication mechanisms and other factors integral to the enforcement of rights or enjoyment of participatory goods.
- Individual members of the group pooling their collective interests believe that doing so will lead to value aggregation either in the enforcement of rights or in the enjoyment of participatory goods.
- Free and informed consent from individuals forming part of the group.
- In some cases, "free and informed" consent may be more achievable through a collective than through an individual, simply because the collective is more powerful vis-à-vis the processor. So, an individual would consent to their data being pooled as part of a collective, and the collective would work with individuals to make decisions about how their data is processed. The right still vests in the individual, but the collective is a means of enforcing that right.
- Second, what happens if there is a conflict between an individual and the larger group? (Say a health data cooperative decides to share a set of anonymised data, including Individual A's data, with a certain research company, but Individual A does not want to share their data with that company.) Our solution, again drawing from the vesting of rights in individuals, is that the individual's choice would prevail and they would have a right to opt out.
- Third, what about groups that are formed algorithmically? That is, individuals don't even know that they are part of a group, as membership is driven by classification. Our initial answer is to have meaningful transparency rights. For example, Google, for all its faults, does enable one to see how one is being classified on factors such as household income, interests and parental status. This is definitely not enough, but it is a good start. There should also be options to correct classifications and to opt out. This is, of course, a bit theoretical and may be difficult to apply given the power dynamics.
- Google messed mine up a bit. One of my interests was "American Football", whereas I watch the other kind of football, so definite error there.
- And finally what are the duties of data processors who classify people into groups?
Do you have an idea of how/to whom your research will be disseminated - policymakers, lawyers, scholars, digital rights groups, etc - and how they might use it?
- We wrote this paper as part of an edited volume on data governance. The compilation is largely focused on India, though our paper looks at this issue more broadly. While it was designed around the discourse in India, and consequently for policymakers, scholars, activists in India, the ideas do have relevance in several other contexts too.
- In our paper specifically, we propose some principles which are useful for emerging collectives and co-ops in how they design themselves.
- Our research is also built on and a response to similar movements. I would consider data stewardship researchers, advocates and practitioners the direct intended audience for our work.
- It would be interesting to see some of the principles applied in practice. If this can feed into Mozilla's stewardship work, for example, or actually be piloted by a data collective, it would be very interesting to see whether it actually works, or not!
- For algorithmically determined collectives, the first challenge individuals face is a lack of awareness about the classification decisions made about them. The most obvious risk of harm arising from such classification is discrimination. We look at post facto adequation as well as integrative tools to build transparency and awareness about algorithmic decisions that turn on perceived membership of a group, which we see as critical to understanding how such decisions are made.
- We also look at emerging data governance legislation and policies, and feel that rather than focusing on how mandatory data sharing regimes can be created, any regulatory proposal needs to first consider the ethical basis for such mandates, and how protection will be built in.
What is our role as digital rights, justice, security and public interest groups in ensuring that the democratic and community approach to data governance is adequately implemented?
- IMHO, digital rights groups have not adequately engaged (and are not adequately engaging) with data governance models, especially those involving new innovations. Digital rights groups are critical both in contributing to the growth of these models and in holding them to account, so we need to engage with these questions more deeply.
- Let us weigh in a bit on the importance of independent research. Narratives are usually shaped by those with power and deep pockets, i.e. certain states and companies. Our understanding of the world and the internet usually gets coloured by these narratives. That's where independent, impartial research becomes important.
- And this research often requires rigour to point out the flaws in dominant narratives: data as 'oil', data as property, etc.
- So it's always nice to be able to discuss and exchange research and ideas on forums such as this and get feedback
- And sometimes, people, especially from the private sector, are willing to engage with some of this research, so we've always thought of a paper or an op-ed as a starting point for a conversation, even if the idea is unrealistic or flawed.
And how can the community reach you?
- Arindrajit: firstname.lastname@example.org
- Amber: email@example.com