May 22 2025 GM


Safety & Responsibility in AI-driven Applications in War and Armed Conflict

Glitter Meetups

Join Sarra, an AI activist who will be sharing about:

  • The types of AI applications in the military domain during war or armed conflict
  • The key actors involved in AI governance in military applications
  • Recommended safety measures for responsible use of AI in war/armed conflict operations

Sarra is a dedicated AI activist and research enthusiast with a strong passion for tech policy, addressing unethical applications, particularly in border security, war, and armed conflict contexts. Through research, conflict analysis, and human-centric tech/AI policy analysis, she draws on extensive knowledge and experience in the MENA region & Africa. Sarra is currently a member of the Coalition for Independent Technology and a Youth Advisory Board member within the AU-EU Youth Voices Lab.

What is Glitter Meetup?

Glitter Meetup is the weekly town hall of the digital rights and Internet Freedom community at the IF Square on the TCU Mattermost, at 9am EDT / 2pm UTC. It is a text-based chat where digital rights defenders can share regional and project updates and expertise, ask questions, and connect with others from all over the world! Do you need an invite? Learn how to get one here.

Notes

Can you introduce the work you are doing on AI safety and responsibility? What brought you to this work? Why did you start looking into this?
  • I actually first entered the field of AI & machine learning academically, during my Masters. Then, having worked in humanitarian organizations and humanitarian think tanks focused on conflict-affected countries (especially in Africa and the Middle East), I knew how important the intersection between tech and humanity / AI and safety was. I started looking more closely at AI responsibility in these humanitarian/security contexts in particular because I noticed that it wasn't tackled as much, and I wanted to dare!
  • At first, I was working on developing machine learning tools for humanitarianism, but now I'm more focused on tech/AI policy research in conflict contexts
What about the research you are currently conducting? Can you share a short overview on what it's about? Like key issues, cases, arguments, etc.?
  • I have recently published a policy digest on Africa's POV on lethal autonomous weapons systems and the role of the African Commission on Human and Peoples' Rights (ACHPR), but right now I'm investing more time in more detailed research, such as the following ongoing projects:
    • Tech applications in Italy's (EU) border control in the Mediterranean Sea and the implications for African migrants arriving from Tunisia
    • AI-powered weapons systems applied in the war in Gaza and Lebanon and the systemic & strategic approach of algorithmic warfare
  • For previously published work, you can go through the "published" section on my website: https://sarrahannachi.crd.co/
Can you share what are some of the technologies that are being militarised? How do they use them, for what? Where are they tested?
  • The key technologies that are now being militarized, given multiple factors (especially the AI market), are mostly hardware, especially:
    • Surveillance drones / targeting drones / or both
    • Unmanned Ground Vehicles (UGVs) and Unmanned Aerial Vehicles (UAVs), which have now been upgraded to autonomous UGVs and UAVs
    • 'killer robots', which are now the combination of UGVs'/UAVs' mobility + the drones' force-engagement capabilities
  • It's important to know that sometimes these are Lethal Autonomous Weapons Systems (LAWS), but sometimes only semi-autonomous
What are UGVs/UAVs?
  • Unmanned Ground Vehicles (UGVs) are remote vehicles or vessels used by the military (especially in mine-prone areas) to approach the enemy/target with the least damage to the military unit. UAVs are the same but for aerial space and territory. The difference between UAVs and drones is that UAVs are most of the time autonomous in their mobility, while drones are sometimes operated by a human pilot (either remotely piloted or autonomous)
How are these used? How are they using the surveillance drones, and where? And the ground vehicles? And the killer robots! Do they work the way storm troopers work?
  • These Weapons Systems are mostly used for:
    • surveillance (e.g. thermal/satellite imagery connected to the drones' data)
    • target identification: facial recognition by comparing what is seen on drones/CCTV, etc. with an existing database or with the model's training dataset (hence the risk of data bias; see the illustrative sketch at the end of this answer)
    • target engagement: killer robots and drones have the capacity to engage with force because they are equipped with machine guns/missiles
    • decision support/decision-making processes: predictive models guiding military commanders
  • The selection depends on which AI products the market is deploying into existing conflicts/wars. So, unfortunately, some tech giants are testing their AI products/services in actual wars instead of allocating an entire testing & evaluation process (with an independent entity) before deployment
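
To make the target-identification step above more concrete, here is a minimal, purely illustrative Python sketch of comparing a captured face embedding against a watchlist database. The cosine-similarity matching, the 0.8 threshold, and all names are assumptions chosen for illustration, not a description of any deployed system; the comments mark where training-data bias enters.

import numpy as np

def cosine_similarity(a, b):
    # Cosine similarity between two embedding vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_against_watchlist(query, watchlist, threshold=0.8):
    # Return (identity, score) for the best watchlist match above the
    # threshold, or None. The bias risk mentioned above lives in the
    # embeddings themselves: a recognition model trained on unrepresentative
    # data produces poorer embeddings for under-represented groups, so this
    # simple thresholding yields more false matches for them.
    best_id, best_score = None, -1.0
    for identity, ref in watchlist.items():
        score = cosine_similarity(query, ref)
        if score > best_score:
            best_id, best_score = identity, score
    return (best_id, best_score) if best_score >= threshold else None

# Illustrative usage: random vectors stand in for real face embeddings.
rng = np.random.default_rng(0)
watchlist = {f"person_{i}": rng.normal(size=128) for i in range(5)}
query = watchlist["person_2"] + rng.normal(scale=0.1, size=128)  # a noisy re-capture
print(match_against_watchlist(query, watchlist))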

For what part of those things is AI actually used (esp. the surveillance drones)?

Are these tools' AI applications particularly sophisticated/well-tested/adequate for these use cases, or are they also quite fallible? Is some degree of fallibility being tolerated by the users?
  • The problem with today's private sector/military contracts is that speed and numbers are prioritized over precision. So, little time and money is invested in testing and governance, unlike in mass development and deployment. Outcome amplification of an AI model is now perceived as a good thing by a military commander with little time and a stressful security problem. Hence, the decision-making process of a military commander/unit is still affected even if the AI used is not fully autonomous!
A lot of tech started out as military technology before being deployed as consumer tech. Is this the case for most AI technologies? And can you map out the specific actors and how they are connected to each other through the tech they are building?

Before? Yes. Military tech was usually in-house. But now, we're seeing the capitalist agenda of tech giants play a big role in what, when, where, and how tech and AI-driven equipment is used by governments in the military sector. So, to answer your question about the actors, it is primarily:

  • Transaction-wise:
    • Governments (defense ministries)
    • Tech providers of chip and any other hardware for computational logistics
    • AI, cloud and software providers
  • Governance-wise:
    • international organizations responsible for legal frameworks such as the EU arms control frameworks, international humanitarian law, etc., e.g. the Organization for Security and Co-operation in Europe
    • media organizations, because half of what happens in conflict settings is not easily covered (because of systemic internet shutdowns during conflict, etc.)
Who is the main audience for the research you are doing? How would you like your research to ideally be used and by who?
  • The main audience for my research is actors from international organizations, but that audience has also included law enforcement leaders and legal advisors to militaries (from African countries)
Could you shed some light on the specific security contexts in which these technologies are being deployed? Do you think there could potentially be a racial, "Western", and/or Eurocentric angle to the development of and funding towards such technologies?
  • Very good question! It entirely depends on the data that the AI model is being trained with. For instance, law enforcement and the military both use more open-source intelligence (hence the possible social stigmatization and racial discrimination in an algorithm that expects criminals and terrorists to speak or look a certain way)
  • So, if I were to give an example, I'd say the AI being used in Gaza, for instance, is trained on completely different data than the AI being used on the Ukraine-Russia border; hence, the geopolitical context affects the bias level, the precision, and all sorts of model evaluation metrics
Also on the research side of things, I'm curious to know about the sort of day-to-day tasks, methods, etc. How do you look into these questions, how do you collect data, etc.?
  • I am now relying more on academic sources in my research, rather than limiting myself to media and humanitarian reports, because academic work usually shows more technical evidence, and technical evidence is important for AI responsibility and accountability.

Also, my biggest challenge in my research on military AI is the lack of transparency from governments and their militaries about the interpretability of their models. They provide short blog posts to explain how a weapons system works and believe that makes them transparent contributors, but it's not enough (not just for researchers but for all parties to a conflict and all the actors mentioned above)

Judging by the logic of difficulties with transparency among governments, would your work be increasingly challenging at the moment, given the number of right-wing, trigger-happy, rhetoric-driven, security-oriented government leaders popping up around the world?
  • That is true. I mentioned I'm working on two research projects: tech in border control and tech in war/conflict. But the issue with transparency is not limited to the military, because law enforcement, border agencies, and the military all sometimes share the same tech providers and collected data (biometric or not) when they are in the same country/region (e.g. the EU)
Are there resources that you can share with the community on this work and topic? And finally how can folks in digital rights be a part of work uncovering such uses of AI? What actions and support can happen as you do this work?
Is your policy digest on Africa's POV on lethal autonomous weapons systems and the role of the African Commission on Human and Peoples' Rights (ACHPR) available?