Meet the Future of AI – Support for the European Democracy Shield: policy, practice and partnerships
Event & purpose (2 Dec 2025): Meet the Future of AI convened EU policymakers, journalists, researchers, fact-checkers, media professionals and civil society to explore how responsible, human-centred AI can support the European Democracy Shield (EUDS) through policy alignment, practical tools and cross-sector partnerships.

Event summary
The Meet the Future of AI event brought together policymakers, journalists, researchers, fact-checkers, media professionals and civil society actors to explore how artificial intelligence can support the European Democracy Shield (EUDS) through policy alignment, practical tools and cross-sector partnerships. Against a backdrop of accelerating technological change, hybrid threats and increasingly sophisticated disinformation campaigns, the event examined how Europe can protect democratic processes while preserving fundamental rights, media freedom and citizen trust.
Discussions highlighted that AI-driven disinformation is no longer marginal but structural to the contemporary information environment. Synthetic content is growing in both volume and realism, enabling low-cost, high-impact manipulation of public debate and elections. Speakers consistently stressed that while Europe has developed robust regulatory frameworks, such as the Digital Services Act, the Code of Practice on Disinformation and the emerging European Democracy Shield, their effectiveness depends on coordinated enforcement, operational capacity and sustained collaboration across stakeholders.
Across three thematic sessions, the event addressed: (i) the policy foundations and governance architecture of the EUDS; (ii) the concrete impact of disinformation on elections in the EU and Eastern Europe; and (iii) the role of AI tools, evaluation frameworks and media literacy in strengthening democratic resilience. EU-funded research and innovation projects demonstrated practical, human-centred AI solutions supporting fact-checkers, journalists and citizens, while underscoring the necessity of human-in-the-loop approaches, ethical safeguards and transparency.
A unifying message emerged: defending democracy in the digital age requires a continuous, ecosystem-based effort that aligns regulation, technology, professional practice and citizen empowerment. The event positioned the European Democracy Shield not only as a regulatory initiative, but as a living framework that depends on partnerships, shared infrastructures and long-term investment in trust, literacy and resilience.
Intro
The event opened with an introduction by Silvia Boi (Sistemi Nalder, AI-CODE), who underlined that Europe’s democratic systems are currently facing profound challenges. Our information environment is evolving rapidly, as technological acceleration, hybrid threats, foreign interference, and increasingly sophisticated disinformation campaigns reshape how citizens access information, how trust is built, and how public debate develops.
This event invites us to reflect on how responsible, human-centred AI can reinforce Europe’s resilience in this complex landscape. AI cannot and should not act alone, but when guided by clear democratic principles, ethics, and transparency, it can support early detection, threat analysis, content verification, and a deeper understanding of emerging risks.
This is precisely the ambition behind the European Democracy Shield: to create a coordinated, cross-border architecture that strengthens Europe’s capacity to prevent, detect and respond to digital manipulation.
A key element of this vision is building a more integrated European ecosystem for situational awareness, where policymakers, researchers, fact-checkers, civil society and media organisations work together instead of in silos.
Opening
- Shared threat assessment: AI-driven disinformation is now structural to the information environment, cheap, fast and increasingly realistic, amplifying manipulation of public debate and electoral integrity (“disinformation on steroids”).
- Strategic priorities (three-pillar approach): (1) protect the integrity of the information space (incl. labelling/recommender responsibility and networks like EDMO), (2) strengthen democratic institutions (free and fair elections; free and independent media), and (3) build societal resilience via citizen awareness and media literacy.
Krisztina Stump – Head of the Media Convergence and Social Media Unit at the European Commission
Krisztina Stump framed artificial intelligence as both a powerful enabler and a growing risk for Europe’s democratic systems. In the context of the European Democracy Shield (EUDS), she stressed that AI is dramatically amplifying the scale and impact of disinformation, turning it into what she described as “disinformation on steroids”. This escalation poses a direct threat to democratic debate, electoral integrity, and citizens’ trust in information.
AI-generated content is now a structural part of the information landscape. Its presence is visible not only in the sheer volume of content produced, but also in its increasing realism and sophistication. Recent electoral experiences, such as the Irish presidential elections, illustrate how AI is used both to generate misleading narratives and to strategically disseminate them, exploiting social media dynamics and recommendation algorithms to maximise reach and influence.
To respond to these challenges, Krisztina outlined a framework built around three interconnected pillars. The first focuses on safeguarding the integrity of the information space, through instruments such as EDMO and the European network of fact-checkers, as well as ongoing work under the Code of Practice on Disinformation, including improved content labelling and more responsible recommendation systems.
The second pillar centres on strengthening democratic institutions, with particular attention to protecting free and fair elections and supporting free and independent media.
The third pillar aims at boosting societal resilience, recognising citizen awareness and media literacy as essential long-term defences against manipulation.
Within this policy architecture, EU-funded projects are already delivering practical tools tailored to different actors in the information ecosystem. vera.ai and AI4Trust support fact-checkers; TITAN and AI4Debunk empower citizens; while ELLIOT and AI-CODE provide resources for journalists and media professionals navigating AI-driven environments. Together, these initiatives demonstrate how policy objectives can be translated into actionable, on-the-ground solutions.
Krisztina closed with a clear call to join forces. Events like this, she noted, are not only spaces for discussion but opportunities to build new connections, spark collaborations, and collectively develop ideas that strengthen Europe’s democratic resilience in the age of AI.
Session 1 – Policy Context: The European Democracy Shield (EUDS)
Policy bottom line: Europe’s frameworks (notably the Digital Services Act and Code of Practice on Disinformation) are seen as robust, but their effectiveness depends on consistent enforcement, timely action and stronger operational capacity.
- How to operationalise EUDS: Position the Shield as a living, cross-border architecture built on shared situational awareness, incl. a European Centre for Situational Awareness, a Stakeholder Platform for real-time information sharing, and a proposed Common Research Support Framework to give trusted actors privileged access to infrastructure, tools and relevant data.
- Media & trust constraints: Participants stressed a “moment of crisis” for independent journalism and fact-checking, the need for sustainable funding models, harmonised standards/metadata, and stronger AI literacy for journalists, alongside attention to threats like spyware and SLAPPs; public-service innovations (e.g., verified-news chatbots) were highlighted as trust-building examples.
- Added proposal for the upcoming MFF: Translate the repeated call for sustainable support into a dedicated multiannual EUDS envelope in the next MFF, funding shared situational-awareness infrastructure, trusted data/standards, independent journalism & fact-checking capacity, and continued R&I plus literacy initiatives.
The opening session of the panel discussion, moderated by Paolo Cesarini (EDMO), explored the policy context and collaborative opportunities surrounding the European Union Democracy Shield (EUDS). The session brought together representatives from the European Commission, journalism, media organisations, and the fact-checking community, reflecting the multi-stakeholder approach that underpins EUDS. The discussion emphasised the evolving policy landscape, research collaboration, technological innovation, and the critical need to protect the integrity of Europe’s information ecosystem. Paolo Cesarini set the tone by underscoring the importance of dialogue and cooperation across different sectors: regulators, researchers, broadcasters, journalists, and fact-checkers. He framed the EUDS as a bridge between these communities, ensuring that policy, research, and practice are aligned in the common goal of strengthening democratic resilience against information threats.
Alberto Rabbachin – DG CONNECT
Alberto Rabbachin elaborated on the European Commission’s strategic vision behind the EUDS, positioning it as a comprehensive framework to safeguard, strengthen, and promote democracy in the EU. He cited the Special Eurobarometer report on protecting and promoting democracy, noting that European citizens are increasingly aware of the vulnerability of democratic processes and expect proactive measures to defend them. Rabbachin highlighted the Digital Services Act (DSA), together with the Code of Practice on Disinformation, as a foundational tool for maintaining the integrity of the information space and countering foreign information manipulation and interference (FIMI). He also stressed the role of the European Centre for Situational Awareness, envisioned as a hub connecting all relevant Member State actors, and the new Stakeholder Platform as a dedicated forum to support real-time information sharing and cross-sector collaboration. On technology, Rabbachin recognised AI as both a challenge and a potential solution. He outlined ongoing EU initiatives offering long-term funding to develop detection tools (e.g., the verification plugins under vera.ai and WeVerify) for identifying AI-generated disinformation. He also introduced the EUDS proposal of a Common Research Support Framework, which would provide researchers and civil society with privileged access to infrastructure, tools, and relevant data to develop actionable, evidence-based insights.
Marie Bohner – Agence France-Presse (AFP)
Marie Bohner welcomed the EUDS as a timely and much-needed initiative for media and fact-checking communities currently under severe strain. She described this as a “moment of crisis” for independent journalism, stressing the importance of a whole-of-society response that includes platforms, policymakers, and citizens. Bohner noted that fact-checking partnerships with social media platforms have been effective in reducing misinformation on those platforms, but sustained progress depends on strengthening citizen awareness. She warned against uncritical reliance on AI tools, reminding the audience that AI outputs are not always correct and should not be trusted blindly. Bohner called for continued innovation and independence, emphasising harmonised metadata standards across Europe (through initiatives like EFCSN and the Horizon AI Cluster) as a cornerstone for building resilience and credibility in the information space.
Renate Schroder – European Federation of Journalists (EFJ)
Renate Schroder focused on the intersection between journalism and fact-checking, describing them as complementary functions bound by the same mission of truth and accountability. She endorsed the holistic approach of EUDS and outlined three core priorities: (i) enforcement of strict rules and systematic risk assessment, (ii) the urgent need for sustainable funding models for journalism, and (iii) strengthening AI literacy and digital skills among journalists. Schroder also emphasised the need for European-built AI models, free from dependence on commercial systems, and called for special attention to threats such as spyware attacks and SLAPPs against journalists. She urged more robust EU support systems and cross-media collaboration to protect press freedom and uphold democratic values.
Wouter Gekiere – European Broadcasting Union (EBU)
Wouter Gekiere framed the EUDS as a regulatory framework that ultimately serves the public interest. Beyond compliance, he urged innovation in the way public service media engage citizens, particularly in helping audiences identify credible information sources. Gekiere discussed the EBU’s AI chatbot, launched in several countries, which allows citizens to query accurate, verified news – an example of how technology can reinforce trust. He advocated for collaborative funding structures linking media, fact-checkers, and other actors to develop tools aligned with democratic values rather than profit motives.
Across all interventions, a consensus emerged: the EUDS represents a vital step toward a coordinated European effort to protect democracy in an age of digital vulnerabilities. The panel highlighted the need for sustainable funding, ethical AI, cross-sector partnerships, and citizen engagement. As Europe strengthens its democratic shield, the success of this strategy will depend not only on regulatory robustness, but also on the resilience, cooperation, and innovation of all actors involved.
Session 2: Disinformation, Elections and Democracy: Insights from the EU and Eastern Europe
- Elections lens: Disinformation ecosystems operate continuously, not just pre-vote; examples described “blitzkrieg-style” attacks and AI-generated political content that can outpace institutions lacking agility and compute, reinforcing the need for permanent readiness, not episodic crisis response.
The second session gave the floor to journalists, researchers and regional observers, with a particular focus on the role of disinformation in elections in both the EU and Eastern Europe.
The discussion opened with a light remark from Francesco Saverio Nucci (AI-CODE), who defined himself as the “end-user of elections.” Behind the joke, however, lay a serious common concern: European democracies are increasingly exposed to sophisticated, persistent, and low-cost waves of disinformation, and current responses are not keeping pace.
Clara Jiménez Cruz, Maldita & EFCSN
Clara Jiménez Cruz of Maldita and EFCSN set the tone early by warning against the illusion that election integrity can be defended only in the days preceding a vote. Disinformation ecosystems operate continuously. Bad actors are entrenched, equipped with loyal audiences, and behave like “trojan horses” within democratic systems, quietly eroding trust until the moment of political vulnerability arrives. She highlighted a paradox: Europe has strong legislation to counter manipulative online influence, yet its enforcement remains timid. Worse, she argued, hostile actors are actively working to make sure the EU does not enforce its own laws. Her central plea was simple: apply the law, because the regulatory tools are already on the table.
Péter Krekó, Political Capital Institute
Building on this, Péter Krekó described Hungary as a laboratory of the “post-advertisement” era and a testing ground for a new, unsettling phase of political communication. Ahead of the 2026 elections, political actors have begun flooding the information space with AI-generated memes, cartoons, photos, and deepfake videos. This content circulates not necessarily because it is believed, but because, true or false, it affects perceptions, emotions, and ultimately voting behaviour. AI-generated political ads are particularly hard to regulate, able to bypass the filters that platforms apply to traditional political advertising. Krekó pointed to Denmark’s new regulatory approach, which places responsibility squarely on platforms, as a potential model for the EU, if it proves effective.
Manuela Preoteasa, Euractiv Romania (AI4Debunk project)
Turning to Romania, Manuela Preoteasa painted a vivid picture of elections taking place in a digital sphere saturated with zero-cost, high-impact disinformation. Newly activated accounts, coordinated volunteer networks, and sustained amplification efforts framed both the annulled elections and their re-run. Among the narratives that spread were conspiracies, mocking portrayals of politicians as “Western puppets,” and rapid-fire “blitzkrieg-style” misinformation attacks that overwhelmed public discourse. Francesco re-entered the discussion to highlight the challenge of timing: European institutions lack both the computational power and the operational agility to counter such waves quickly and effectively.
Viktoras Dauksas, Debunk.org (AI-CODE project)
For Viktoras Dauksas of Debunk.org, this gap is structural. State-backed threat actors have built entire industries around influence operations, while Europe still lacks a comparable defensive ecosystem. Even basic online scams, profitable for platforms and harmful for citizens, go insufficiently enforced. He suggested exploring a “cybersecurity bounty” model to detect vulnerabilities and vectors that allow disinformation to spread: incentives for experts to find the weak points before malicious actors do. Dauksas also criticised the near absence of educational and awareness content on major platforms during elections, raising the question: how can we reach people if the platforms won’t show them trustworthy information? His strategic proposal: invest in building robust datasets to train European LLMs, tools that operate within EU values, norms, and oversight.
Kevin El Haddad, University of Mons (AI4Debunk project)
Finally, Kevin El Haddad from the University of Mons (AI4Debunk) reminded participants that any technological solution must start with an understanding of ordinary users. Fact-checking tools only work when people use them, and most citizens simply won’t engage if it requires effort. He outlined cognitive pitfalls such as “Implied Truth”, when only some posts are flagged, the unflagged ones are mistakenly assumed to be true, and “Lazy Accuracy”, where people skip verification if they believe they already know the answer. A deeper issue, he added, is trust: many do not feel comfortable relying on a machine to guide their judgment. The ideal solution would be fact-checking systems embedded directly within social media platforms, requiring no action from the user, but implementing such a model raises governance, transparency, and platform-cooperation challenges.
As a shared conclusion, it is worth noting that, across their different perspectives, all speakers converged on a similar diagnosis: Europe faces persistent, sophisticated disinformation threats, yet its response is fragmented and underpowered. The tools, the knowledge, and even the legal frameworks often exist; what is missing is consistent enforcement, timely action, and user-centric design. The underlying message was clear: defending European elections requires a continuous, coordinated, and technologically empowered effort, one that matches the scale and speed of the threats it seeks to counter.
Session 3: AI & Disinformation – Tools, Evaluation and Regulation
- Tools showcased as building blocks: EU-funded projects (AI4TRUST, TITAN, AI-CODE, AI4DEBUNK, PROMPT) demonstrated human-in-the-loop approaches across the disinformation lifecycle, emphasising transparency, accountability, and literacy/critical-thinking support rather than automated judgement.
Session 3, chaired by Riccardo Gallotti (FBK; AI4TRUST, AI-CODE), examined the challenge of online disinformation across its full lifecycle: detection and monitoring, assessment and debunking, and the longer-term role of media and AI literacy in strengthening societal resilience. The session showcased concrete outputs and demonstrations from EU-funded projects AI4TRUST, TITAN, AI-CODE, AI4DEBUNK and PROMPT, illustrating how advanced AI tools can support democratic resilience while remaining human-centred, transparent, accountable and aligned with EU regulation. A complementary perspective was provided by ELLIOT, which highlighted the importance of integrating social sciences and humanities into technical approaches.
Collectively, the projects demonstrated tangible contributions to the European Democracy Shield, linking technological innovation with democratic values, institutional capacity and citizen empowerment.
AI4TRUST – Integrated detection and monitoring platform
Marcello Scipioni – FINCONS; Manuela Preoteasa – Euractiv Romania
AI4TRUST presented an operational platform for analysing social media and online content using multimodal AI tools. Demonstrated capabilities included content-level analysis of text, audio and video; reverse image and video search; deepfake and splicing detection; transcription-based text analysis; and the identification of disinformation signals and check-worthy claims. Two usage modes were highlighted: a rapid, verdict-oriented interface for quick assessments, and an advanced analytical view providing journalists and researchers with detailed evidence and statistics.
Additional components, such as real-time monitoring dashboards and an infodemic observatory, support the screening, tracking and evaluation of content across major platforms, as well as the identification of emerging disinformation waves and harmful narratives at scale. Use cases illustrated how AI outputs are combined with professional judgement, reinforcing the central role of human-in-the-loop verification.
TITAN – Critical thinking through Socratic AI
Antonis Ramfos – ATC
TITAN introduced an approach that shifts the focus from classifying content as true or false to strengthening citizens’ critical thinking skills. The project demonstrated a conversational AI assistant that engages users in structured dialogue, encouraging reflection on misleading claims, argument quality, and source credibility. Grounded in psychology, philosophy, and design research, TITAN applies a Socratic methodology within generative AI systems to help users recognise emotional manipulation, framing effects, weak sourcing, and visual deception. The project frames media literacy as an active cognitive practice, supporting long-term democratic participation and resilience.
AI-CODE – Media literacy and verification tools for professionals
Jasminko Novak – EIPCM; Akis Papadopoulos – CERTH
AI-CODE focused on media literacy and verification tools designed for professional newsroom environments. The PromptED platform was presented as an interactive, GenAI-based coaching environment that helps journalists understand how prompt design influences bias, reliability and accuracy in large language model outputs. Through guided simulations and hands-on experimentation, PromptED supports responsible use of generative AI in journalistic practice.
Complementing this, the project demonstrated the Media Asset Assessment and Management (MAAM) tool, which supports the verification of video, images, text and social media content. Features such as automated tagging and captioning, synthetic media detection, geolocation and image forensics are designed to integrate into collaborative newsroom workflows.
AI4DEBUNK – Assisted debunking tools for citizens and professionals
Kevin El Haddad – University of Mons
AI4DEBUNK presented AI-supported tools aimed at helping both citizens and professionals make more informed decisions when encountering potentially misleading information. A demonstration focused on a browser plugin designed for use by journalists and fact-checkers, while remaining accessible to the general public. The tools do not deliver automated judgements; instead, they support assisted verification by providing multiple analytical signals.
The system is built around modular components that can be used individually or in combination, including similarity search against curated databases, detection of AI-generated media, and out-of-context analysis assessing coherence between images, text and other modalities. Outputs can be aggregated into an indicative score, accompanied by explanations. The tools support text, image and audio content.
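The aggregation step described above, combining independent analytical signals into one indicative score with explanations, can be sketched roughly as follows. This is a minimal illustration under stated assumptions, not the project's actual implementation: the signal names, the 0-to-1 scale, and the equal weighting are all hypothetical.

```python
# Hypothetical sketch: combine per-module verification signals into an
# indicative score with explanations. Signal names and weights are
# illustrative, not AI4DEBUNK's real components.

def aggregate_signals(signals, weights=None):
    """Combine per-module scores (0 = no concern, 1 = strong concern)
    into one indicative score plus a human-readable explanation list."""
    weights = weights or {name: 1.0 for name in signals}
    total_weight = sum(weights[name] for name in signals)
    score = sum(signals[name] * weights[name] for name in signals) / total_weight
    # Explanations list the strongest signals first, so a fact-checker
    # sees at a glance which module drove the score.
    explanations = [
        f"{name}: {value:.2f}"
        for name, value in sorted(signals.items(), key=lambda kv: kv[1], reverse=True)
    ]
    return round(score, 2), explanations

# Example: three modular components each report their own score.
score, notes = aggregate_signals({
    "similarity_search": 0.2,  # weak match against a curated database
    "synthetic_media": 0.9,    # likely AI-generated image
    "out_of_context": 0.6,     # image/text coherence is questionable
})
```

The point of keeping the per-signal explanations alongside the score is exactly the assisted-verification stance described above: the number is indicative, and the human reviewer can inspect which component raised the flag.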
PROMPT – Narrative and coordination analysis in the European information space
Clément Bénesse – OPSCI.AI
PROMPT applies NLP methods, large language models and network analysis to study disinformation narratives related to the war in Ukraine, European elections and LGBTQIA+ rights. The project supports analysts and journalists by providing analytical filters that help surface relevant content more efficiently, rather than producing automated judgements.
Three analytical dimensions were presented: narrative analysis using classifiers, embeddings and curated narrative knowledge bases; identification of persuasion techniques and rhetorical devices through LLM-based annotation; and detection of coordinated or inauthentic behaviour using temporal and semantic proximity signals. A dedicated component focuses on Wikipedia as a key shared information resource, analysing page-level signals such as activity intensity, content quality and editing behaviour. Early findings indicate coordinated editing patterns and revisionist strategies on politically sensitive topics.
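The coordination signal mentioned above, temporal plus semantic proximity, can be illustrated with a toy sketch. This is my own simplification, not the PROMPT pipeline: the time window, similarity threshold, and the bag-of-words stand-in for real text embeddings are all assumptions.

```python
# Toy illustration of coordinated-behaviour detection: flag pairs of posts
# published close together in time whose content vectors are nearly
# identical. Thresholds and the token-count "embedding" are stand-ins.
from collections import Counter
from math import sqrt

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse token-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def coordinated_pairs(posts, max_gap_s=300, min_sim=0.8):
    """Return post-id pairs within max_gap_s seconds of each other
    whose texts are semantically near-duplicates."""
    vecs = [(p["id"], p["t"], Counter(p["text"].lower().split())) for p in posts]
    flagged = []
    for i in range(len(vecs)):
        for j in range(i + 1, len(vecs)):
            close_in_time = abs(vecs[i][1] - vecs[j][1]) <= max_gap_s
            if close_in_time and cosine(vecs[i][2], vecs[j][2]) >= min_sim:
                flagged.append((vecs[i][0], vecs[j][0]))
    return flagged

posts = [
    {"id": "a", "t": 0,    "text": "the election was stolen share now"},
    {"id": "b", "t": 120,  "text": "the election was stolen share now"},
    {"id": "c", "t": 9000, "text": "lovely weather in Brussels today"},
]
```

Here `coordinated_pairs(posts)` flags only the two near-identical posts published two minutes apart; the unrelated, later post is ignored. Production systems would of course use learned embeddings and far richer behavioural features than this sketch.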
ELLIOT and humanities perspectives – Understanding the grey zone
Marc Tuters – University of Amsterdam
ELLIOT contributed an ethical, legal and humanities perspective, emphasising the need to complement technical solutions with social and cultural analysis. The presentation highlighted “grey zone” phenomena such as ambient propaganda, engagement hacking and narrative diffusion without clear intent. These perspectives help explain why certain narratives resonate across different contexts and provide valuable input for policy development and regulatory alignment under the AI Act, the Digital Services Act and related EU frameworks.
Closing Remarks: Peter Friess – DG CONNECT
The discussions confirmed that AI-driven disinformation is now structural to the information environment: cheap, fast, and increasingly realistic. It is therefore essential to pair Europe’s regulatory frameworks with consistent enforcement, operational capacity, and cross-border coordination to protect information integrity. Strengthening the European Democracy Shield will require human-in-the-loop tools that support journalists, fact-checkers, researchers and citizens; clearer platform accountability and risk mitigation; interoperable standards and shared infrastructures; and sustained investment in media freedom, trusted data resources, and continuous media and AI literacy, so that democratic resilience can be built as an ecosystem rather than a one-off response.
Conclusions: cross-cutting messages for the European Democracy Shield
Across the session, several cross-cutting messages emerged. Presentations consistently emphasised the importance of human-AI collaboration, ensuring that AI systems augment rather than replace professional and civic judgement. Media and AI literacy were addressed at both professional and citizen levels, through tools that encourage reflection and participation. Finally, the need for interoperable, commons-based infrastructures and sustainable AI architectures was highlighted as a prerequisite for long-term impact.
Overall, Session 3 demonstrated how AI can support journalists, regulators, and citizens in detecting, assessing and contextualising disinformation, while fostering critical engagement rather than automated decision-making. Together, the results presented offer concrete and actionable building blocks for the European Democracy Shield, linking advanced AI capabilities with democratic values, regulatory compliance, and societal resilience.

