Are tech firms complicit in human rights abuses in Ethiopia’s Tigray region?

In December 2022, a lawsuit was brought against Meta in Kenya’s High Court. The legal action claimed that Facebook, as the company was previously known, promoted hate speech on its platform that led to ethnic violence and killings in Ethiopia. The allegations centred particularly on events in the Tigray region, which was embroiled in civil war between November 2020 and November 2022.

According to Fisseha Tekle, legal adviser at Amnesty International and one of the figures behind the case, “in Ethiopia, the people rely on social media for news and information. Because of the hate and disinformation on Facebook, human rights defenders have also become targets of threats and vitriol. I saw first-hand how the dynamics on Facebook harmed my own human rights work and hope this case will redress the imbalance.”

The legal action is also being brought by Abrham Meareg, whose father, Meareg Amare, was hunted down and killed amid the ethnic violence in Tigray in November 2021. The elder Meareg had been subjected to a campaign of incitement and hatred on Facebook, with growing numbers of users calling for violence against him. Facebook was alerted to this but removed the posts only a week after his death. The case alleges that Facebook is “awash with hateful, inciteful, and dangerous posts in the context of the Ethiopia conflict” and that the company bears responsibility. This forms the legal basis of the claimants’ $1.6 billion case against the global tech firm.

Such cases are neither isolated nor confined to Ethiopia or Tigray. In 2018, for example, the United Nations (UN) reported that social media played a significant role in the 2017 Rohingya genocide in Myanmar’s Rakhine State, with Facebook highlighted as a “useful instrument” in the dissemination of prejudicial material. In the context of Tigray, the UN Special Advisor on the Prevention of Genocide said that similar content had “fuelled the normalisation of extreme violence.” Another platform, Telegram, was also used by the military junta in Myanmar to target dissenting voices.

As the world continues to move rapidly towards an ever-more digital future, it is clear that social media platforms have an increasingly large role to play in global politics, including in warzones and hostile environments. But are these platforms up to the job?

In 2021, the astonishing revelations of Facebook whistleblower Frances Haugen suggested not. Part of the picture Haugen revealed concerned the platform’s practical failings. She noted, for example, that Facebook has a serious shortage of safety controls on the platform, a shortage particularly acute in non-English markets, including those across Africa and the Middle East, where the firm has failed to invest in local content moderators. In the Nigerian elections earlier this year, there were even concerns that widespread disinformation on Facebook and Twitter would undermine the legitimacy of the democratic process more broadly.

Arguably more concerning is Haugen’s allegation that Meta’s failings go further than this – and are in some senses deliberate. Haugen argued that the social media platform’s algorithms are in fact programmed to give more prominence to divisive or abusive language and content because that is seen as the best way to generate clicks, engagement, and advertising revenue.

Amnesty International has argued that “Meta uses engagement-based algorithmic systems to power Facebook’s news feed, ranking, recommendations and groups features, shaping what is seen on the platform. Meta profits when Facebook users stay on the platform as long as possible, by selling more targeted advertising. The display of inflammatory content is an effective way of keeping people on the platform longer.” Is Facebook in fact profiting from the type of violence we have seen in Tigray and elsewhere?

It is worth pointing out that Facebook has always denied this charge. In a statement responding to Haugen’s claims, the firm said: “at the heart of these stories is a premise which is false. Yes, we’re a business and we make profit, but the idea that we do so at the expense of people’s safety or wellbeing misunderstands where our own commercial interests lie. The truth is we’ve invested $13 billion and have over 40,000 people to do one job: keep people safe on Facebook.”

Britain’s former Deputy Prime Minister and Meta’s Vice-President for Global Affairs and Communications, Nick Clegg, has also denied this is the case, for the simple commercial reason that “advertisers do not want their content next to unpleasant content.”

Whether or not tech platforms are deliberately amplifying hateful content, the simple fact remains that such content is still prominent on these platforms and can have devastating real-world effects. Their failure to remove clearly inciteful material reflects a reluctance to invest the requisite sums in rigorous monitoring and removal procedures. That reluctance comes at the cost of human lives.

“My father was killed because posts published on Facebook identified him, accused him falsely, leaked the address of where he lives, and called for his death,” said Meareg. “My father’s case is not an isolated one. Around the time of the posts and his death, Facebook was saturated with hateful, inciteful, and dangerous posts.”

“Many other tragedies like ours have taken place.”

Author: Harry Clynch

#Ethiopia #Tigray #Meta #Facebook #Telegram
