Combating disinformation in the age of AI

Find out how AI is fuelling global disinformation and the efforts being made to counter these threats using technological and legal solutions.

AI-fuelled disinformation has become a growing global threat, particularly in political and social contexts. Recent examples, such as the Brazilian elections in 2022, illustrate the damage this technology can do to democracies. Initiatives such as detection tools and verified content are being explored to stem the tide of false information. However, these approaches are not infallible, and governments around the world are seeking to introduce legislation that holds major technology companies accountable for this growing threat.

The challenges posed by AI in spreading disinformation

Artificial intelligence has transformed the way information is produced and consumed around the world. With the rise of deepfakes and the automated manipulation of online content, disinformation is taking on a new dimension. During the 2022 presidential elections in Brazil, social networking platforms were inundated with false information, directly threatening the country’s democratic stability.

The example of Supreme Court Justice Alexandre de Moraes, who ordered the suspension of the social network X (formerly Twitter) in August 2024, illustrates how complex the fight against these abuses has become. While the stated aim was to protect Brazilian democracy, the measure raised concerns about censorship and the restriction of individual freedoms.

In addition, the World Economic Forum's Global Risks Report 2024 ranks misinformation and disinformation among the most severe short-term global risks. This underlines the scale of the problem and the need for solutions to regulate how information circulates in digital spaces.

The CounterCloud.io experiment showed that a sophisticated AI system can generate misleading content at scale for as little as 400 euros, demonstrating how little investment a disinformation campaign now requires. This raises ethical and technical challenges that must be addressed quickly if these technologies are not to further erode the foundations of truth and trust in the media.

The harmful effects of AI-fuelled disinformation

Using AI to spread disinformation can have profound consequences that go far beyond simply influencing public opinion. Automatically generated disinformation campaigns can deepen social polarisation and distort electoral processes; in some cases, they can fuel violent social movements or a general erosion of confidence in institutions.

A concrete example is the ability of deepfakes to alter the perception of real events. These falsified videos, which can convincingly imitate the speech or actions of public figures, have already been used to discredit politicians or stoke international tensions. In India, for example, videos of this type have been used to influence voters during election campaigns. The manipulation of historical facts is another disinformation technique, as shown by the CounterCloud.io experiment, in which the AI invented events to sow doubt.

These phenomena can also encourage people to abstain from voting, creating an imbalance in the democratic process. According to some studies, online disinformation can lead to up to 15% additional abstention in contexts where AI is used to systematically spread misleading information.

Automated propagation of misinformation: a new paradigm

Recent technological developments show that AI systems can not only generate disinformation, but also distribute it automatically on an unprecedented scale. The experiment conducted by CounterCloud.io, which demonstrated that a single server could automate the production and distribution of misleading content across multiple platforms, highlights the urgent need to find robust solutions.

In this experiment, the AI used a so-called ‘gatekeeper’ module to select target content, then wrote counter-articles, created fake journalist profiles and generated fake comments. The process was fully autonomous and cost just a few hundred euros. The content then spreads virally, helped by recommendation algorithms optimised to maximise its reach, which makes the fight against disinformation even more complex.

These technologies raise concerns that a single individual or group could manipulate public opinion on a global scale with a derisory budget. By comparison, traditional disinformation campaigns required considerable human and financial resources. Today, a single developer can set up a sophisticated information-manipulation operation with a few lines of code and a cloud server.

Technological solutions: detection, traceability and resilience

Faced with this growing threat, several technological solutions are being developed to detect and prevent the distribution of AI-generated content. Among these, content traceability via watermarking techniques is often cited as a promising approach. This method involves marking AI-generated content with invisible signatures, enabling it to be identified as artificially produced.
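To make the idea concrete, here is a minimal sketch of an invisible watermark, assuming Python with Pillow and NumPy installed. It hides a short, hypothetical bit pattern in the least significant bits of an image’s red channel; the watermarks actually used by AI providers rely on far more robust statistical schemes, so this only illustrates the principle.

```python
# Simplified invisible watermark: hide a short bit pattern in the least
# significant bit of an image's red channel. Real AI-content watermarks use
# much more robust statistical schemes; this only illustrates the principle.
import numpy as np
from PIL import Image

MARK = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)  # hypothetical 8-bit signature


def embed_watermark(path_in: str, path_out: str) -> None:
    pixels = np.array(Image.open(path_in).convert("RGB"))
    red = pixels[:, :, 0].reshape(-1).copy()
    red[: MARK.size] = (red[: MARK.size] & 0xFE) | MARK     # overwrite the LSBs
    pixels[:, :, 0] = red.reshape(pixels.shape[:2])
    Image.fromarray(pixels).save(path_out)                   # use a lossless format (PNG)


def detect_watermark(path: str) -> bool:
    pixels = np.array(Image.open(path).convert("RGB"))
    lsbs = pixels[:, :, 0].reshape(-1)[: MARK.size] & 1
    return bool(np.array_equal(lsbs, MARK))
```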

However, these techniques are not infallible. Watermarks can easily be erased by manipulations such as file compression or editing, making them difficult to apply at scale. What’s more, current detection tools are not always reliable: genuine images of disasters or conflicts are sometimes falsely classified as fake news, with unintended consequences of their own.
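Continuing the sketch above, a single lossy re-encode is typically enough to destroy such a naive mark, which is one reason watermarking alone is considered fragile (file names here are hypothetical):

```python
from PIL import Image

# The mark survives a lossless save but not a JPEG round-trip.
embed_watermark("photo.png", "marked.png")
print(detect_watermark("marked.png"))         # True: LSBs intact in the PNG

Image.open("marked.png").save("recompressed.jpg", quality=85)
print(detect_watermark("recompressed.jpg"))   # almost certainly False: JPEG
                                              # quantisation scrambles the LSBs
```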

Content provenance is another possible solution: it documents how and by whom a piece of content was created, making its origin transparent. The approach is supported by standards such as C2PA (Coalition for Content Provenance and Authenticity), which embed cryptographically signed information in media files to attest to their authenticity. While this method can indeed strengthen trust, it requires massive adoption by social media platforms and news agencies to become a viable solution.
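As a heavily simplified sketch of the provenance idea, and not the actual C2PA manifest format, the following assumes the third-party Python `cryptography` package: a publisher signs a hash of the file, ships the signature alongside it, and anyone holding the publisher’s public key can check that the content has not been altered since publication.

```python
# Provenance in miniature: sign a file's hash at publication time, verify it
# at consumption time. Real C2PA manifests also record editing history and
# embed the data in the file itself; only the cryptographic core is shown.
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)


def sign_content(path: str, key: Ed25519PrivateKey) -> bytes:
    digest = hashlib.sha256(open(path, "rb").read()).digest()
    return key.sign(digest)                      # signature travels with the file


def verify_content(path: str, signature: bytes, pub: Ed25519PublicKey) -> bool:
    digest = hashlib.sha256(open(path, "rb").read()).digest()
    try:
        pub.verify(signature, digest)            # raises if file or signature changed
        return True
    except InvalidSignature:
        return False


# Usage (hypothetical file): the newsroom signs, the platform or reader verifies.
publisher_key = Ed25519PrivateKey.generate()
sig = sign_content("photo.jpg", publisher_key)
print(verify_content("photo.jpg", sig, publisher_key.public_key()))  # True if untouched
```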

Regulation and accountability of platforms

Beyond technological solutions, regulation plays a central role in the fight against misinformation. Jurisdictions such as the European Union and China have already passed rules requiring platforms to label or report AI-generated content. Adopted in 2022, the EU’s Digital Services Act (DSA) introduced obligations for platforms to make their algorithms more transparent and to act against misleading content.

However, implementing such laws presents challenges. Technology platforms such as X (formerly Twitter) or Facebook rely on complex and often opaque algorithms, making it difficult for regulators to monitor their practices. In addition, some of these platforms operate in countries with less stringent legislation, which complicates international cooperation.

The solution could lie in a multilateral approach, where several countries work together to harmonise their regulations and ensure stricter control of digital content. In the short term, this could include financial penalties for companies that fail to comply with the directives, or tighter restrictions on platforms that refuse to cooperate with local authorities.

Towards global collaboration

The fight against AI-fuelled disinformation is a global challenge requiring coordinated efforts between governments, technology companies and civil society. While technological solutions offer interesting avenues, they must be combined with strict regulation and increased public awareness to be truly effective.

The introduction of strong laws, combined with transparency and content traceability initiatives, could represent a significant step forward. However, as the example of Brazil has shown, these efforts must be conducted with caution so as not to compromise individual freedoms or restrict freedom of expression.
