Digital Rights Weekly Update: 13 - 19 March 2026

2026/03/19

Policy Insight:

This week, 7amleh released new research exposing how EU funding and exports of high-risk AI systems are enabling surveillance, control, and repression. The findings reinforce 7amleh's core warning about harmful AI ecosystems being financed, tested, and scaled across borders.

Nowhere is this clearer than in the expansion of AI warfare. As seen in the Gaza Genocide, and more recently in the war of aggression against Iran, AI-driven targeting systems are accelerating the speed and scale of military action at devastating civilian cost. As one analysis by Avner Gvaryahu in the Guardian put it, “Gaza was the laboratory. Minab is the market,” referring to the bombing of a girls' school in Minab, where at least 168 people were killed, most of them girls aged seven to 12.

At the center of this expansion of AI warfare is Palantir Technologies, whose integration into military data systems raises urgent concerns about the automation of targeting and the normalization of data-driven warfare.

At the same time, decisions by platforms such as Meta’s Instagram to weaken encryption protections for private messaging signal a broader erosion of privacy. From AI warfare to platform governance, unchecked technologies are consolidating power over life, data, and security, with Palestinians at the forefront. This moment demands exposing data exploitation, resisting the normalization of surveillance, and pushing for enforceable protections that defend privacy against an expanding and increasingly entrenched technological threat.

News Digest

New Report Examines the Role of EU Funding and Export of High-Risk AI Systems in Escalating Human Rights Violations in Palestine and the Region

7amleh

March 16, 2026, 7amleh – The Arab Center for the Advancement of Social Media has released a new report titled “How EU Funding and Exports of High-risk AI Systems Exacerbate Severe Human Rights Violations in Palestine and the Broader Region.” The report provides an in-depth analysis of how European policies contribute to financing and exporting advanced digital technologies used in contexts of surveillance, control, and repression, including in the occupied Palestinian territory and countries across the Middle East and North Africa.

The report shows that the European role does not end with developing regulatory frameworks for artificial intelligence within its own borders. Through funding programs, investment mechanisms, and technology exports, the EU also supports the spread of high-risk systems abroad, where they are used in areas such as migration management, biometric surveillance, predictive security systems, and data analysis. In the absence of strong safeguards and oversight, these technologies contribute to deepening violations related to freedom of expression, privacy, freedom of movement, and political participation.

These aren’t AI firms, they’re defense contractors. We can’t let them hide behind their models

The Guardian

There is an Israeli military strategy called the “fog procedure”. First used during the second intifada, it’s an unofficial rule that requires soldiers guarding military posts in conditions of low visibility to shoot bursts of gunfire into the darkness, on the theory that an invisible threat might be lurking. It’s violence licensed by blindness. Shoot into the darkness and call it deterrence. With the dawn of AI warfare, that same logic of chosen blindness has been refined, systematized, and handed off to a machine. Israel’s recent war in Gaza has been described as the first major “AI war” – the first war in which AI systems have played a central role in generating Israel’s list of purported Hamas and Islamic Jihad militants to target: systems that processed billions of data points to rank the probability that any given person in the territory was a combatant.

Blood tech: The UK ambassador, the sex offender, Palantir, and Gaza

Al Jazeera

According to Open Intel, a platform tracking corporate involvement in the Gaza genocide, Palantir has actively recruited veteran members of Israel’s cyber intelligence wing, Unit 8200. After agreeing to what its website refers to as a “strategic partnership” with Israel in January 2024, the company significantly stepped up its operations in Gaza and the occupied West Bank, combining data sets from intercepted communications, satellite imagery, and other online sources to compile targeting lists, or “kill lists”, for the Israeli military. While Palantir characterises its technology as an analytical tool rather than a direct targeting system, its integration into Israeli command-and-control workflows has drawn criticism from human rights researchers. Senior figures at the United Nations have also argued that technologies such as Palantir’s materially shape the pace and scale at which the Israeli army is able to target people.

AI Accelerates Warfare In Iran But Raises Urgent Humanitarian Risks

Modern Diplomacy

The push for speed in military operations has a long history. During World War II, the targeting cycle from collecting reconnaissance photos to planning strikes could take weeks or months. In the 1991 Gulf War, mobile missile launchers used “shoot and scoot” tactics, requiring rapid tracking and response. The armed Predator drone, first used in 2002, represented a breakthrough: high-resolution video could be transmitted in real time from the drone to U.S. operators, who could immediately fire missiles on targets. Today, AI amplifies this concept, allowing strikes at speeds human operators could never achieve.

The rapid pace of AI-enabled targeting carries enormous risks for civilians. In Gaza, the Israeli AI systems Lavender and Gospel have reportedly been programmed to tolerate up to 100 civilian casualties, and sometimes more, for a single suspected combatant. Since October 2023, more than 75,000 people have died in Gaza under such operations.