Concerns have been raised that Meta's move could have serious consequences, as misinformation may "ignite" political instability and fuel violence.
On January 10, US President Joe Biden criticised Meta for abandoning its fact-checking programme. Speaking at a White House press conference, Biden called the decision “shameful”, saying it undermines America’s commitment to telling the truth.
On the same day, Brazil gave Meta a 72-hour deadline to explain how its information verification policy applies to the country and what plans it has to protect fundamental rights on its platforms. Attorney General Jorge Messias warned that his office could take legal action against Meta if the company does not respond promptly.
Previously, Meta announced adjustments to its content moderation policy, ending the third-party fact-checking program in the US and shifting to a “Community Notes” model that allows users to contribute verification insights.
However, Meta later indicated that this policy would be applied across all countries in which it operates.
The International Fact-Checking Network (IFCN) has warned of immediate serious repercussions if Meta expands this policy change beyond the US.
Meta's fact-checking programme currently operates in more than 100 countries.
In a letter addressed to Meta's CEO Mark Zuckerberg, IFCN cautioned that some nations are particularly vulnerable to misinformation that could destabilise politics and exacerbate violence. The organisation expressed concern that if Meta decides to halt its fact-checking programme globally, it could have dire consequences in many regions.
In Europe, Meta's abrupt decision on content moderation is putting pressure on the European Union (EU) to assert its authority over major tech corporations operating in the region. For his part, Meta's CEO has accused the EU of passing an increasing number of laws that promote censorship.
The European Commission (EC) has firmly rejected these accusations, asserting that the EU regulates content to ensure platforms like Meta do not allow misinformation, hate speech, or other harmful content to proliferate.
The EU has recently strengthened its legal tools for regulating global digital platforms. Officials in Spain and France have called for tougher action against billionaire Elon Musk, accusing him of interfering in European politics.
French Foreign Minister Jean-Noel Barrot urged the EC to enforce existing laws more strictly to protect Europe's public spaces, adding that Paris would intervene on its own if the Commission does not act promptly.
Earlier, at the end of 2024, the European Commission fined Meta nearly 800 million EUR for violating EU antitrust regulations, one of the largest fines the EC has imposed on a major tech corporation in recent years and among the EU's ten biggest antitrust penalties.
Additionally, the EC found that Meta had abused its dominant market position by imposing unfair business conditions on competitors in advertising services across its platforms, including Facebook and Instagram.
Meta's decision is seen as irresponsible at a time when information technology shapes every aspect of social life; countries increasingly recognise that social networks such as Facebook and Instagram can be a “double-edged sword”.
If information on these platforms is not adequately verified and managed, the consequences could be unpredictable. The EU's warnings, the sharp criticism from President Biden's administration and Brazil's legal action over Meta's lax approach to information verification are therefore necessary measures to head off potential risks of political instability and violence.