April 13 2023 GM


Glitter Meetup is the weekly town hall of the Internet Freedom community at the IF Square on the TCU Mattermost, at 9am EDT / 1pm UTC. Do you need an invite? Learn how to get one here.

Understanding Automated License Plate Recognition (ALPR) as a form of algorithmic surveillance

Our featured guest's research engages in critical studies of data, algorithms, and digital infrastructures, particularly those of computer vision. Gabriel will share with the community the findings of his most recent research on Automated License Plate Recognition (ALPR) as a form of algorithmic surveillance, from its creation and impacts to forms of resistance against it.

Gabriel Pereira (@gabrielpereira on Mattermost) is an Independent Research Fund Denmark International Postdoc, based as a Visiting Fellow at the London School of Economics and Political Science (UK). Gabriel has a PhD in Information Science from Aarhus University (Denmark).

Notes

Can you tell us a bit more about what Automated License Plate Recognition (ALPR) is and how it works?
  • Thanks, Islam, for the question! Automated Number/License Plate Readers (ANPR/ALPR) are an application of computer vision, used for automatically reading and recording the number plates of passing cars (a minimal sketch of this image-to-text step follows after this list). Such systems are increasingly common all across the world, used for example in data-driven policing, environmental regulation, and "seamless" parking. The data produced through these operations can be aggregated and analyzed at scale, raising important questions about surveillance and personal data protection. The UK is widely regarded as the birthplace of ANPR systems and the largest exporter of these technologies.
  • One of the most problematic uses of ALPR is for policing, where it can lead to different forms of algorithmic violence. Key concerns of mine are function creep (the broadening of a surveillance technology's use once it is implemented), as well as overinvestment in surveillance tech (instead of other, better uses of public money).
  • I've written a research article on the Danish landscape and have now been broadening the research to the UK and Brazil (as well as other, smaller case studies in other countries). I would love to hear any stories or perceptions you may have of the use of this technology elsewhere, as well as any more specific questions about how it works.
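  • To make the basic mechanism above more concrete, here is a minimal, hypothetical Python sketch of the image-to-text step, assuming OpenCV and pytesseract are installed. The plate localisation is a naive contour heuristic and the file name is a placeholder; real deployed ANPR systems use trained detectors and far more robust OCR, so this only illustrates the general idea, not how any actual system is built.

```python
# Minimal, hypothetical sketch of the ALPR "read" step: find a plate-shaped
# region in a still image and OCR it. Not representative of deployed systems.
import cv2
import pytesseract


def read_plate(image_path):
    img = cv2.imread(image_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    gray = cv2.bilateralFilter(gray, 11, 17, 17)      # denoise but keep edges
    edges = cv2.Canny(gray, 30, 200)

    # Look for a roughly rectangular contour that could be a number plate
    contours, _ = cv2.findContours(edges, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
    for c in sorted(contours, key=cv2.contourArea, reverse=True)[:10]:
        approx = cv2.approxPolyDP(c, 0.02 * cv2.arcLength(c, True), True)
        if len(approx) == 4:
            x, y, w, h = cv2.boundingRect(approx)
            crop = gray[y:y + h, x:x + w]
            # OCR the candidate region as a single line of text
            text = pytesseract.image_to_string(crop, config="--psm 7").strip()
            if text:
                return text
    return None


if __name__ == "__main__":
    # "passing_car.jpg" is a placeholder image path
    print(read_plate("passing_car.jpg"))
```

  • The point of the sketch is how simple the read itself is; the surveillance concerns discussed below come from what happens to each read afterwards: storage, aggregation, and sharing.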
What motivated you to study ALPR as a form of algorithmic surveillance? How did you first get interested in working on this?
  • Definitely, a key problem with it is how it could be used against activists. What's even scarier is that we may not even know how it's in fact being used.
  • I think I'm very interested in ALPR because it is a very banal surveillance technology. Whereas there seems to be more and more activism and research around facial recognition, I've seen very little being said about ALPR. In fact, most people don't even know what it is, or may not completely understand the problems such technology may bring. As you mention there, although it may have problematic uses, it is rare that we hear about it, or that activists discuss the impact of such tech on their work. However, there is increasing investment in ALPR, with countries that had almost no cameras 5 or 10 years ago now being completely covered and tracked (e.g. Denmark).
  • I mean, we may see the camera on the street, like in the image above... But seeing the camera doesn't mean seeing the infrastructure behind it, how the data is captured and shared, and which kinds of decisions it leads to.
  • Even for researchers and activists, it may be hard to explain what is problematic about such technologies, also because there is so little written about them and so little public debate. And when things like this become infrastructure, they're even harder to oppose.
Do you know of any civil society advocacy campaigns/awareness raising that have focused on ALPRs?
  • There are few people working directly in relation to ALPR, but they exist. Most are smaller-scale projects. One example, similar to what you show there, is coveillance.org, which does guided tours of surveillance infrastructure.
  • A key example for me is ANPG, a collaboration between individual activist-hackers in Denmark, where they crowdsource the locations of ALPR cameras in the country. A really small but crucial intervention.
  • In the UK, there have been different campaigns, like a Crowd Justice campaign.
  • In Brazil, I haven't found anything very specific, besides journalists heroically writing about it. But there are campaigns around facial recognition. Tire Meu Rosto Da Sua Mira is a great example, and here's a crucial journalistic response to ALPR in Brazil, one of the few. More recently, they're doing a whole series on facial recognition.
What did you find to be the main impacts of ALPR on individuals and communities? Are there any examples that come to mind of how it was used?
  • I must preface by saying that, as far as I have found, there is no extensive empirical research about how this directly impacts communities or individuals. Part of the problem is that this is something which is difficult to measure, as the impacts of a surveillance technology like this may not be felt for quite some time, and its impacts are also naturally complex and multifaceted.
  • With that in mind, you've listed a few issues. For example: we have different reports of how ALPR can be used for overpolicing and oversurveilling marginalized communities. ALPR can also lead to many false positives, causing terrible consequences for those stopped (often due to reading errors by the machine, or problems in the database). One recent example of this seems to be the case where the Met Police killed Chris Kaba, a Black man, because the car he was driving was flagged by ALPR as associated with a weapon. Of course racism played a role in this, but it was initiated by the use of the surveillance technology. Additionally, it is important to look at how ALPR collects data on people, and how this data is already being used to oversurveil marginalized communities.
  • Moving on to a higher level, ALPR can also be used for tracking activists, as you mentioned, but it is naturally hard to find reports of anything related to this. In Brazil, the same federal government agency that was responsible for ALPR also produced anti-left-wing reports, so you can imagine the issue there. Could they also use the data for something else in the future? It's difficult to know, but definitely likely considering the lack of any regulatory system protecting against this expansion. Moreover, in the past weeks it emerged that organized crime had accessed the ALPR system to track the movements of cars of people they were staking out for assassination. Pretty problematic stuff, and cybersecurity risks naturally emerge once more and more data is collected.
  • On an even higher level, perhaps, is the question: which kind of policing do we actually want? I've been talking to some groups in the USA, where ALPR is being proposed as a solution to a rise in gun crime. However, there is nothing indicating that ALPR is actually useful for reducing gun crime. In fact, much surveillance studies literature questions this! So these groups are working with local councils to instead consider a public health approach, which considers how to actually support communities' issues rather than putting more and more surveillance on them. We also know for a fact that oversurveilling communities may lead to an increase in police response, leading to a terrible feedback loop, especially for marginalized communities once again. But of course it's difficult to completely know how this will play out beforehand.
  • So, a big question for me continues to be: how do you measure or track any of this? How do we communicate it to people, and what kind of resistance is possible? Especially considering governments and police departments don't share as much info as they should, it's difficult to keep track and actually know the long-term impacts. For example, a hard but important question to ask is “how is ALPR actually changing how the Police operate?” The book "Predict and Surveil" by Sarah Brayne gives some responses on this, as she studies the LAPD: the Police are collecting more and more data, and integrating it in many new ways. But the actual impacts of this will often only be felt by marginalized communities, while the Police continue to implement more and more tech as a solution to the problems it creates.
  • Btw, a question which I'm sure will interest this group: much is written about these surveillance technologies in the US and Europe, but they're being exported to other places, where they're often implemented in ways that are even more surveillance-intensive. There's often little said about this in academia, and the impacts it creates are also uncertain.
You've shared different examples of resistance, so, keeping it optimistic, do you have any recommendations? Especially in contexts where this conversation is not being held at all, how do we check the banal surveillance around us?
  • I think that's important, and also for us to always consider the actual impact of these technologies and the regulatory oversight around them. For example, the cameras implemented for environmental zoning in Denmark are designed for privacy, and don't collect data on cars that are passing (as long as they are listed as non-polluting in the database). That's protected by law, and thus very different from the UK, where the data from environmental zoning is now being shared directly with the police. (A rough sketch of this design contrast follows below.)
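  • As a toy, hypothetical illustration of that contrast, here is a short Python sketch. The plate values and the compliance list are invented; the point is only that one design discards reads of compliant cars immediately, while the other retains every read and can later share it.

```python
# Hypothetical contrast between a privacy-preserving environmental-zone camera
# (compliant cars leave no trace) and a retain-everything design.
from datetime import datetime, timezone

POLLUTING_PLATES = {"XY99ZZZ"}  # placeholder compliance database


def privacy_preserving_read(plate, enforcement_log):
    """Store a read only if the car is flagged as polluting; otherwise discard."""
    if plate in POLLUTING_PLATES:
        enforcement_log.append((plate, datetime.now(timezone.utc)))
    # Compliant plates are checked against the database and never stored.


def retain_everything_read(plate, bulk_log):
    """Store every read; the resulting database can later be shared or repurposed."""
    bulk_log.append((plate, datetime.now(timezone.utc)))


enforcement_log, bulk_log = [], []
for plate in ["AB12CDE", "XY99ZZZ", "CD34EFG"]:     # placeholder plates
    privacy_preserving_read(plate, enforcement_log)
    retain_everything_read(plate, bulk_log)

print(len(enforcement_log), "read(s) retained vs.", len(bulk_log), "read(s) retained")
# -> 1 read(s) retained vs. 3 read(s) retained
```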
Have you (or someone else) identified how the system behind this form of algorithmic surveillance works? Like, to what other systems is it connected, how much data is it actually collecting, for how long, and who has access to it? And how can it be freely used by abusive authorities?
  • We did indeed try in our article here to begin listing the different elements of this infrastructure.
  • A bit high level still, but we list and detail four common components: (1) a camera, (2) an algorithmic processing unit, (3) on-site/remote data storage, and (4) interfaces to different analytics solutions.
  • I'm hoping to write much more about the details of each of these in the future. For example, there are different techniques for detecting letters and these may matter a lot. Also, the location of the database where the data is stored can make a lot of difference for how function creep happens in the future (see the sketch below for how these components fit together).
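  • As a rough, hypothetical illustration of how those four components connect, here is a short Python sketch. The class and field names are invented for this example; it only shows the data flow from a camera read through processing into storage and out again via an analytics query, which is where function creep can happen.

```python
# Hypothetical sketch of the four components as a data flow:
# (1) camera -> (2) algorithmic processing -> (3) storage -> (4) analytics interface.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import List


@dataclass
class PlateRead:               # (2) output of the algorithmic processing unit
    plate: str
    camera_id: str
    timestamp: datetime
    confidence: float


@dataclass
class ReadStore:               # (3) on-site/remote data storage
    retention_days: int
    reads: List[PlateRead] = field(default_factory=list)

    def ingest(self, read: PlateRead) -> None:
        self.reads.append(read)

    def query_by_plate(self, plate: str) -> List[PlateRead]:
        # (4) analytics interface: every query like this is a potential vector
        # for function creep, since a store built for parking or environmental
        # zoning can equally answer policing queries.
        return [r for r in self.reads if r.plate == plate]


# (1) the camera plus processing unit would produce reads like this one:
store = ReadStore(retention_days=30)
store.ingest(PlateRead("AB12CDE", "cam-07", datetime.now(timezone.utc), 0.94))
print(store.query_by_plate("AB12CDE"))
```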
As it's very difficult for the State to stop using it, can we advocate for alternative approaches or frameworks that are more secure? How do you envision these alternatives?
  • So far, my feeling is that there are many different paths, and each of them could be important in different ways. There ARE ways to make this technology privacy-preserving, particularly for parking and environmental zoning. I don't think that's enough, though. There is important work being done in stopping the use of such tech, especially in local communities and surveillance oversight boards. Is that enough? I also don't think so. There's also activism and legal action to reduce police use of databases, but that is often very hard, especially in some countries. There is also the work of making these cameras visible, and the journalism around them. In sum, there's a long fight needed on many fronts, and it'd be difficult for me to point to one path I think is the way to go, besides more consciousness-raising.
  • You may also be interested in an activist group I organize with, No Tech for Tyrants; we recently published a report on police abuse of surveillance tech.
A participant adds: the fact that it is so difficult to track is in itself an important finding. Not being able to measure these effects properly raises many concerns about the use of this kind of technology, especially if we take into account what we do know: that governments tend to abuse their data power, that marginalized communities are not protected but surveilled, and that data breaches are a sword of Damocles hanging over anybody, anywhere.
  • And Gabriel answers: indeed, the fact that it's made into something unspoken is crucial to understanding it. In reality, the governments of Denmark and the UK (which I know more about) refuse to say where these cameras are located or to give much info about how they are used. This is intentional, not a bug. That's why activists had to crowdsource the locations of the cameras, so they could better understand, track, and resist. But it's really interesting, because now when I interview officers at the Danish Police and they say the location of the cameras is available, I respond: yeah, because activists went above and beyond to track these cameras.
How do they react to that?
  • They institutionally fight back against activists tracking the cameras. But I think it's interesting to understand every police force as a multifaceted institution. Some people inside the Police itself may not actually want more surveillance, for many reasons, or may be doubtful about the impact of surveillance on the work they do. So what I mean is, though the Police as a whole may be pushing for ALPR (from the top down), there may be people inside it who disagree with this increased data collection.
  • Here's a video from an activist in Denmark talking about the process of constructing the map on anpg.dk.
  • I do want to make it clear, though: I'm very aware Denmark is a very particular, financially rich country with a long history of trust in government, so we shouldn't take it as a model for (most) other places.