Content moderation in conflict zones: What role for big tech?

[Published here May 21, 2021]

May 21 (Thomson Reuters Foundation) – Conflict broke out this month between Israel and militants in the Gaza Strip – and big tech has not been spared.

Instagram and Twitter have blamed technical errors for deleting posts mentioning the possible eviction of Palestinians from East Jerusalem, but data rights groups fear “discriminatory” algorithms are at work and want greater transparency.

Tensions first rose in early May over the possible eviction of Palestinians from their homes in Jerusalem’s Sheikh Jarrah neighbourhood but soon escalated into a full-blown conflict between Gaza and Israel. A ceasefire announced overnight is set to put an end to the fighting.

Can big tech stay neutral when conflicts erupt?

How have tech giants responded?

Facebook, Twitter, Google and Venmo all declined to comment on whether criticism over the Gaza-Israel conflict had prompted broader internal reviews of their content moderation processes or other policies.

But they have responded to individual criticism: on May 8, Instagram publicly apologised for the deletions of Sheikh Jarrah posts and suspensions of accounts.

This week, its parent company Facebook set up a round-the-clock “special operations center” to deal with content on the Israeli-Palestinian conflict.  

“It is staffed by experts from across the company, including native Arabic and Hebrew speakers,” a Facebook spokesman told the Thomson Reuters Foundation.

Facebook has established other “special operations centers” to deal with content on COVID-19, wildfires in California and Australia in recent years, violence in Myanmar, and major elections, including in the United States.  

A Twitter spokeswoman said the company uses “a combination of technology and human review to enforce the Twitter Rules…impartially,” but did not specify whether it had created a dedicated team on the Gaza flare-up.  

Following outrage on social media, YouTube recently deleted a video linked to the Israeli government that depicted rocket fire. A spokeswoman said the company uses automated systems to find content “at scale” while humans help with “contextual decisions” on removing content regardless of language.

What’s tech got to do with the conflict? 

Palestinians took to social media earlier this month to protest the possible Sheikh Jarrah evictions, but many found their posts, photos, or videos removed or their accounts blocked.

Facebook and Twitter blamed “technical glitches” and said the posts and accounts would be restored, but data rights groups found the deletions continued, even when the posts contained no violence or incitement.

The groups accused the tech platforms of “censoring” Palestinian voices through discriminatory content moderation policies, which they said should be made fairer and more transparent.

In protest at the continued restrictions, users are giving Facebook one-star ratings on the Google and Apple app stores, dragging its average rating down.

The raters left comments including “biased policies against Palestinian people”, “no freedom of speech,” and “Free Palestine”. Similar comments were left alongside one-star ratings for Instagram.  

Others have accused Venmo, a peer-to-peer payment service owned by PayPal, of “systemic financial discrimination” after it delayed some donations to Palestinian relief organisations.

A Venmo spokesman attributed the delays to “compliance obligations” with sanctions. Hamas, the Islamist Palestinian group which controls Gaza, and affiliated organisations are on a U.S. terrorism blacklist. 

Google, meanwhile, saw internal complaints: a group of Jewish Google employees urged their leadership to make a public statement on the conflict and fund Palestinian rights organisations.

Have tech companies been caught up by conflict before? 

In 2018, a United Nations fact-finding mission criticised Facebook for allowing posts including anti-Muslim hate speech and calls for violence between the military and ethnic groups in Myanmar.

Last year the Atlantic Council’s Digital Forensic Research Lab found that pro-Azerbaijan accounts were manipulating traffic in a “Twitter war” against their online enemies during the six-week war between Christian Armenia and mainly Muslim Azerbaijan over the enclave of Nagorno-Karabakh.

What does the law say about content moderation and freedom of speech? 

Tech companies in the United States – where most are headquartered – are granted broad protections under Section 230 of the Communications Decency Act of 1996, which frees online platforms like Facebook and Twitter from legal responsibility for what others say or do on their sites. 

That has left content regulation to the firms themselves, exposing them to criticism.

“Frankly, I don’t think we should be making so many important decisions about speech on our own either,” Facebook CEO Mark Zuckerberg said in 2019, announcing the creation of an independent Oversight Board to make binding decisions on whether flagged Facebook content should stay up or be removed. 

“We’d benefit from a more democratic process, clearer rules for the internet, and new institutions,” he said.

Jillian York, director of international freedom of expression at advocacy group Electronic Frontier Foundation, said that “tech companies are not legally bound to be neutral in any way.”

She pointed to what she said were inherent biases on tech platforms against certain Arabic names being used in page titles, or the portrayals of women’s bodies.

The platforms’ internal policies do not mention neutrality either, and all tech firms interviewed by the Thomson Reuters Foundation declined to comment on their neutrality policies during conflict.
