Meta’s recent decision to
eliminate professional fact-checkers and replace them with community notes has
sparked global criticism from users and organizations, who see the move as a
potential gateway to misinformation and societal harm. Meta CEO Mark Zuckerberg
announced the policy shift last week, citing what he described as political
bias among fact-checkers and an erosion of public trust.
While Meta frames this
change as a step toward promoting free expression, critics argue that it
signals a retreat from accountability and opens the door to unchecked
misinformation.
The Campaign on Digital
Ethics (CODE), a South African non-profit advocating for equitable digital
practices, has condemned Meta’s decision as a “reckless gamble” with free
speech. In a strongly worded statement, CODE warned that the removal of
professional oversight would exacerbate global misinformation crises.
“Poor content moderation has
already caused real-world harm, including election interference in the U.S. and
Brazil, violence against the Rohingya and Tigrayan communities, and the spread
of conspiracies during the COVID-19 pandemic,” said CODE. “Replacing fact-checkers with
community notes risks triggering even greater societal divisions and harming
vulnerable communities on a global scale.”
CODE argues that Meta’s
justification—presented as a commitment to free expression—is disingenuous,
masking an effort to cut costs and prioritize profits over public safety. The
organization believes the move sacrifices the integrity of the information ecosystem,
creating fertile ground for misinformation, particularly in countries like
South Africa, where political tensions and historical inequalities make
unregulated online discourse especially dangerous.
“By dismantling safeguards
and replacing them with flawed systems, Meta is not democratising truth; it is
weaponising misinformation,” said Kavisha Pillay, executive director of CODE.
CODE has called on the South
African government to develop robust regulatory frameworks, similar to the
European Union’s Digital Services Act, to hold tech companies accountable for
the content they host. The organization also urged the implementation of a
nationwide digital literacy campaign to equip citizens with the tools to
responsibly navigate the digital landscape.
The Ateneo Human Rights
Centre (AHRC) in the Philippines has also voiced its concerns, emphasizing the
relevance of fact-checking in an era dominated by AI-generated content. AHRC
highlighted the risks posed to countries like the Philippines, particularly in
the lead-up to elections, as the absence of professional oversight could allow
disinformation to thrive unchecked.
AHRC called for vigilance
among Meta users and urged collective action to combat online falsehoods.
Volker Türk, the United
Nations high commissioner for human rights, weighed in on LinkedIn, cautioning
against the dangers of unregulated digital spaces. “When we call efforts to
create safe online spaces ‘censorship,’ we ignore the fact that unregulated
space means some people are silenced—particularly those whose voices are often
marginalized,” he wrote. “Allowing hatred online limits free expression
and may result in real-world harms,” he added.
By: Kanto Okanta