
ENISA AI Hallucinations: Fabricated Sources Found in Official EU Threat Reports
ENISA had to revise its Threat Landscape reports after researchers flagged broken and fabricated citations consistent with AI hallucination patterns. When references fail, the trust layer fails.
Opening: When Official References Cannot Be Verified
ENISA is supposed to be one of Europe's most trusted reference points for threat intelligence and cybersecurity guidance. That trust took a direct hit after researchers flagged that two recent ENISA publications contained broken and seemingly fabricated citations, a pattern consistent with the "hallucinations" characteristic of large language models. The issue is not cosmetic. In threat reporting, citations are part of the security boundary: they let defenders validate claims, reproduce analysis, and confidently translate narrative insights into controls, budgets, and procurement decisions. When the sources cannot be verified with a click, the entire chain of trust starts to fail.
What Happened: Broken Footnotes Raised Red Flags in Two ENISA Reports
The story begins with basic hygiene: readers clicked links. Researchers from Westfälische Hochschule reviewed ENISA reports released in October and November 2025 and found that a noticeable number of footnote URLs did not resolve. Dead links alone are not unusual on the public internet, but the concentration and characteristics of the failures were hard to explain as ordinary "link rot." In coverage of the October report, 26 of the 492 referenced links reportedly returned a 404 error, and the researchers also checked web archives to determine whether the pages had ever existed. When neither the live web nor the archives supported the references, the issue moved from "outdated URLs" to "nonexistent sources."
Several links showed characteristics inconsistent with legitimate URL rot, including embedded entity names that did not match publisher conventions
Several details strengthened the suspicion that the citations were not simply mistyped. One example highlighted in reporting is a URL that appeared to embed the name "APT29" for a Microsoft-related reference. That is a subtle but telling error: Microsoft tracks the group widely known as APT29 under its own taxonomy (Midnight Blizzard, formerly Nobelium), so a genuine Microsoft URL would be unlikely to carry that label, whereas a fabricated link can easily reflect a model's semantic guess rather than the publisher's real URL structure. Researchers described the experience as a failure of verifiability: a public authority published references that could have been validated instantly, yet were not. The result was an avoidable credibility crisis attached to documents that are routinely treated as authoritative inputs for European cyber risk discussions.
Why "Hallucinated Citations" Are a Security Problem, Not an Editorial One
In cybersecurity reporting, citations are not decorative. They are operational. Threat landscape documents frequently serve as a bridge between strategic narratives and tactical implementation, shaping how organizations prioritize risks, justify investments, and select controls. If the citations are unreliable, readers cannot easily distinguish between evidence-based findings and unverified synthesis. That matters when these reports inform decisions such as where to allocate incident response capacity, which sectors need additional resilience funding, or what threat trends should drive national level guidance.
There is also a compounding risk unique to official publications: downstream reuse. Public-sector decks, internal policy memos, vendor presentations, and media summaries often quote "ENISA said…" without rechecking primary sources. A single broken citation can propagate into dozens of secondary artifacts that look credible precisely because they inherit ENISA's brand authority. In the worst case, this becomes a misinformation amplifier inside professional ecosystems, not because anyone intended to mislead, but because the validation layer failed at the root.
ENISA's Response: "Human Error" and a Quiet Revision to Edit Links
When questioned, ENISA acknowledged deficiencies and attributed the problem to human error, stating that AI was permitted only for minor editorial revisions. In other words, the agency's position is not "AI wrote the report," but rather "AI assisted with limited edits and something went wrong in the process." That distinction matters, yet it also raises the most uncomfortable point: even if generative AI is only used for polishing, it can still introduce high-impact errors if it touches the trust layer, including URLs, citations, and references. Threat reporting workflows cannot treat citations like ordinary prose.
ENISA has since updated the Threat Landscape 2025 publication with a formal revision notice. The ENISA publication page states that Version 1.2, dated 09 January 2026, updated the document to edit some links. That is an implicit confirmation that link integrity was part of the remediation scope, and it also gives defenders a timestamp to cite when referencing the corrected version. The update helps, but it does not fully answer the broader governance question: what process controls will ensure that future revisions do not accidentally rewrite evidence trails again?
How Organizations Can Respond: Build a "Citation Integrity Pipeline" Like a Security Control
This incident is useful beyond the headline because it offers a concrete model for governance. If your organization publishes threat intelligence, security advisories, audit reports, compliance guidance, or even internal playbooks, you should treat references as artifacts that must pass validation gates. The simplest control is automation: a link checker as part of the publication pipeline, failing the build if references do not resolve or redirect unexpectedly. This is not theoretical. It is exactly the sort of continuous verification security teams already do for infrastructure as code, container images, and software dependencies.
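As a minimal sketch, the gate can be as small as the script below. It assumes references have already been extracted into a plain-text file of URLs (references.txt is a hypothetical name) and fails the pipeline with a non-zero exit code when a link does not resolve; real workflows would add retries, rate limiting, and archive lookups.

```python
#!/usr/bin/env python3
"""Minimal citation link checker for a publication pipeline (illustrative sketch).

Assumes references were extracted into references.txt, one URL per line
(a placeholder file name). Exits non-zero if any reference fails, so a
CI stage can block publication.
"""
import sys
import urllib.error
import urllib.request

REFERENCES_FILE = "references.txt"  # placeholder: adapt to your pipeline
TIMEOUT_SECONDS = 10
HEADERS = {"User-Agent": "citation-checker/0.1"}

def check_url(url: str) -> tuple[bool, str]:
    """Return (ok, detail) for a single reference URL."""
    request = urllib.request.Request(url, method="HEAD", headers=HEADERS)
    try:
        with urllib.request.urlopen(request, timeout=TIMEOUT_SECONDS) as response:
            return response.status < 400, f"HTTP {response.status}"
    except urllib.error.HTTPError as exc:
        # Some servers reject HEAD; retry once with GET before declaring failure.
        if exc.code in (403, 405):
            get_request = urllib.request.Request(url, headers=HEADERS)
            try:
                with urllib.request.urlopen(get_request, timeout=TIMEOUT_SECONDS) as response:
                    return response.status < 400, f"HTTP {response.status} (GET)"
            except Exception as retry_exc:
                return False, str(retry_exc)
        return False, f"HTTP {exc.code}"
    except Exception as exc:  # DNS failure, timeout, malformed URL, ...
        return False, str(exc)

def main() -> int:
    with open(REFERENCES_FILE, encoding="utf-8") as handle:
        urls = [line.strip() for line in handle if line.strip()]
    failures = []
    for url in urls:
        ok, detail = check_url(url)
        print(f"{'OK  ' if ok else 'FAIL'} {url} ({detail})")
        if not ok:
            failures.append(url)
    print(f"{len(urls) - len(failures)}/{len(urls)} references resolved")
    return 1 if failures else 0  # non-zero exit fails the build

if __name__ == "__main__":
    sys.exit(main())
```

Wired into CI, the non-zero exit blocks publication until an editor either fixes the reference or explicitly waives it, which turns link integrity from a best-effort courtesy into an enforced gate.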
The second control is separation of duties between narrative editing and evidence handling. If generative AI is used, restrict it away from citations entirely: lock the bibliography, references, and URLs behind a controlled workflow, and allow text edits only within bounded sections. Combine that with archiving practices for critical references, such as storing a snapshot of key sources or using a stable archiving mechanism so that readers can verify claims even if the original page disappears. The goal is straightforward: reproduce trust, not just content.
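A sketch of that evidence-handling workflow might look like the following. The lockfile scheme and the use of the Wayback Machine's public Save Page Now endpoint are illustrative assumptions, not anything ENISA has described; the point is that changes to the reference list require an explicit re-approval step and every cited source gets a durable snapshot.

```python
#!/usr/bin/env python3
"""Sketch of an evidence-handling gate: lock the bibliography, archive sources.

Assumptions (placeholders, not from the ENISA incident): references live in
references.txt, and an approved SHA-256 digest from the last evidence review
is stored in references.lock. Snapshots use the Wayback Machine's public
"Save Page Now" endpoint.
"""
import hashlib
import sys
import urllib.request

REFERENCES_FILE = "references.txt"  # placeholder
LOCK_FILE = "references.lock"       # placeholder: digest approved at evidence review

def current_digest() -> str:
    with open(REFERENCES_FILE, "rb") as handle:
        return hashlib.sha256(handle.read()).hexdigest()

def verify_lock() -> bool:
    """Fail if the reference list changed without a re-approved lockfile."""
    with open(LOCK_FILE, encoding="utf-8") as handle:
        return handle.read().strip() == current_digest()

def archive_snapshot(url: str) -> None:
    """Ask the Wayback Machine to capture the source so claims stay verifiable."""
    request = urllib.request.Request(
        "https://web.archive.org/save/" + url,
        headers={"User-Agent": "evidence-archiver/0.1"},
    )
    urllib.request.urlopen(request, timeout=60)  # response body ignored in this sketch

if __name__ == "__main__":
    if not verify_lock():
        print("references.txt changed without evidence-review approval", file=sys.stderr)
        sys.exit(1)
    with open(REFERENCES_FILE, encoding="utf-8") as handle:
        for line in handle:
            url = line.strip()
            if url:
                archive_snapshot(url)
    print("references locked and archived")
```

The design choice is deliberate: any AI-assisted edit that touches the bibliography invalidates the digest and halts the pipeline, so citations can only change through the human review path.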
Lessons Learned and Industry Implications: AI in Public-Sector Cyber Reporting Needs Measurable Trust Controls
This is not a scandal because an agency experimented with modern tooling. The failure is process maturity. Public institutions are increasingly asked to do more with limited staff, and AI will be tempting for drafting, summarization, translation, and editorial consistency. The problem is that cybersecurity publications sit in a high-stakes category: they influence risk perception, drive investment decisions, and shape cross-border collaboration. When credibility slips, the cost is not only reputational. It can translate into slower adoption of guidance, higher skepticism toward future warnings, and fragmentation as stakeholders rely on alternative sources.
The practical takeaway is that "AI usage policy" must be paired with "trust metrics." That means publishing change logs, versioning, and revision notices, and implementing validation steps that are auditable. If the organization can quantify citation accuracy, link health, and review coverage, it can use AI productively without undermining confidence. The most effective outcome of this episode would be a shift toward transparent publication hygiene across the entire security ecosystem, especially where authority and trust are central to mission delivery.
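One hypothetical way to make those metrics auditable is to emit them as a machine-readable record alongside each revision notice; the input file and field names below are invented for illustration.

```python
#!/usr/bin/env python3
"""Sketch: turn validation results into auditable trust metrics.

Assumes a link checker (like the sketch above) wrote one JSON record per
reference to check_results.jsonl, a hypothetical file of lines such as
{"url": "https://example.com", "resolved": true, "human_reviewed": true}.
"""
import json

RESULTS_FILE = "check_results.jsonl"  # placeholder name

with open(RESULTS_FILE, encoding="utf-8") as handle:
    records = [json.loads(line) for line in handle if line.strip()]

total = len(records)
metrics = {
    "total_references": total,
    # Share of citations whose URLs resolved at publication time.
    "link_health": sum(r["resolved"] for r in records) / (total or 1),
    # Share of citations a human evidence reviewer signed off on.
    "review_coverage": sum(r.get("human_reviewed", False) for r in records) / (total or 1),
}
# Attach this record to the revision notice or changelog for auditability.
print(json.dumps(metrics, indent=2))
```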
Related Topics:
- [Zero Trust Network Access (ZTNA) explained](https://anavem.com/explanations/what-is-ztna-zero-trust-network-access-explained) - Understanding verification-first security architectures
- [EDR vs XDR vs MDR differences for security teams](https://anavem.com/explanations/edr-vs-xdr-vs-mdr-differences-explained) - Choosing the right detection and response approach
- Conditional Access for enterprise risk reduction - Implementing policy-based access controls
Frequently Asked Questions
**Did AI generate the fabricated citations?**
Reporting indicates strong signs consistent with LLM-style errors in the citations, while ENISA has described the issue as human error with AI limited to minor editorial revisions. The key operational point is that AI touching references can still break verifiability. The remediation and controls matter more than the label.

**Why do broken citations in a threat report matter?**
Because citations are how defenders validate claims and trace statements back to primary sources. If the evidence trail fails, organizations cannot confidently operationalize the findings. In regulated or public environments, this also creates governance and accountability gaps.

**Are the reports' findings still valid?**
ENISA's public position in coverage is that the statements remain valid even if references were affected, and the agency updated the publication to edit some links. Readers should still prefer the revised version and treat unverifiable claims cautiously.

**How should readers use these reports now?**
Reference a specific version and date, keep local copies of critical documents, and validate key claims by checking primary sources. For high-impact decisions, document the validation steps as part of risk governance.

**How can publishers prevent similar failures?**
Use automated link validation, lock reference sections from AI edits, and require a human evidence review before release. Add revision notices and change logs so readers can track corrections. Treat references as a controlled artifact, not editable prose.