Lithuania’s Strategic Response to AI-Driven Cyber Threats
In an era where technological advancements are rapidly transforming economies, governance, and daily life, the acceleration of innovation brings with it an escalation of digital risks. For Lithuania, a nation deeply integrated with digital infrastructures—from e-signatures to electronic health records—the imperative for robust cybersecurity measures has never been more pressing.
Recognizing cybersecurity as a multifaceted societal challenge, Lithuania has initiated a comprehensive, government-funded program aimed at fortifying the nation’s digital resilience. This initiative, orchestrated by the Innovation Agency Lithuania, seeks to bridge the gap between scientific research and practical, market-ready innovations that enhance e-security. Universities and industry leaders are collaborating to test and implement these solutions within public institutions and critical infrastructure sectors.
Martynas Survilas, Director of the Innovation Development Department at the Innovation Agency Lithuania, emphasizes the collaborative nature of this endeavor:
Our goal is to turn Lithuania’s scientific potential into real impact—solutions that protect citizens, reinforce trust in digital services, and help build an inclusive, innovative economy. The era of isolated research is over. In practice, science and business must work together to keep pace with complex, multilayered threats.
A National Mission: Safe and Inclusive E-Society
Among the strategic national missions under this program, the Safe and Inclusive E-Society stands out for its relevance to the global digital landscape. Coordinated by Kaunas University of Technology (KTU), this mission aims to bolster cyber resilience and mitigate personal data breach risks, focusing on everyday users of both public and private e-services. With a budget exceeding €24.1 million, the mission is a significant investment in Lithuania’s digital future.
The KTU-led consortium comprises leading Lithuanian universities—Vilnius Tech and Mykolas Romeris University—and prominent cybersecurity firms such as NRD Cyber Security, Elsis PRO, Transcendent Group Baltics, and the Baltic Institute of Advanced Technology. Additionally, industry association Infobalt and the Lithuanian Cybercrime Competence, Research and Education Center are integral partners.
The mission’s research and development efforts encompass a wide array of cybersecurity challenges:
– Smart, Adaptive, and Self-Learning Buildings: Developing intelligent infrastructure that can autonomously detect and respond to cyber threats.
– AI-Driven Defense Systems for FinTech: Creating advanced systems to protect financial technology companies and their users from fraud and data breaches.
– Industrial Safety Enhancements: Prototyping threat-detection sensors for critical infrastructure to preemptively identify and neutralize potential risks.
– Hybrid Threat Management Systems: Tailoring solutions for public safety, education, and business environments to manage and mitigate diverse cyber threats.
– Combating Disinformation: Utilizing AI models to automatically detect and counteract coordinated bot and troll activities.
– Automated Cyber Threat Intelligence Platforms: Developing intelligent platforms for real-time analysis and response to cyber threats.
AI Fraud: A New Kind of Threat
Dr. Rasa Brūzgienė, Associate Professor at KTU’s Department of Computer Sciences, highlights the transformative impact of Generative Artificial Intelligence (GenAI) and Large Language Models (LLMs) on cyber fraud:
Until now, the main defense relied on pattern-based detection—for example, automated filters and firewalls could recognize recurring fraud patterns, typical phrases or structures. However, GenAI has eliminated that ‘pattern’ boundary. Today, criminals can use generative models to create contextually accurate messages. Models know how to write without grammatical errors, use precise terminology, and even replicate the communication style of institutions. This means that modern phishing emails no longer resemble ‘classic fraud’ but become difficult to recognize even for humans, let alone automated filters.
The scale and sophistication of these attacks have evolved significantly:
– Increased Scale: GenAI enables the automated generation of thousands of unique, non-repeating fraudulent messages.
– Enhanced Quality: These messages are personalized, multilingual, and often based on publicly available information about the victim, making them highly convincing.
Traditional firewalls and spam filters are losing effectiveness because they can no longer rely on formal features, such as telltale words, phrases, or message structures, to detect fraud. Modern attacks read as authentic, making them nearly indistinguishable from legitimate communication.
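A minimal sketch illustrates why this pattern-based approach breaks down. The scam phrases and scoring rule below are invented for illustration only: a classic fraud message trips several known patterns, while a fluent, context-aware message of the kind GenAI produces trips none.

```python
import re

# Toy version of pattern-based filtering: classic scam phrases are easy
# to flag, but fluent GenAI text contains none of them.
# Phrases and the scoring threshold are illustrative, not a real filter.
SCAM_PATTERNS = [
    r"dear (customer|sir/madam)",
    r"verify your account immediately",
    r"you have won",
    r"kindly do the needful",
]

def pattern_score(message: str) -> int:
    """Count how many known scam patterns appear in the message."""
    text = message.lower()
    return sum(bool(re.search(p, text)) for p in SCAM_PATTERNS)

classic = "Dear Customer, you have won a prize! Verify your account immediately."
genai = ("Hi Jonas, following up on Friday's audit call: could you re-confirm "
         "your e-signature credentials before the compliance deadline?")

assert pattern_score(classic) == 3  # old-style fraud is trivially flagged
assert pattern_score(genai) == 0    # polished, personalized text matches nothing
```

The second message is exactly what the article describes: correct grammar, precise terminology, institutional tone, and no recurring "fraud pattern" for a filter to key on.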
Cybercriminals now have access to a broad arsenal of AI tools, including models like GPT-4, GPT-5, Claude, and open-source alternatives such as Llama, Falcon, and Mistral. Malicious variants like FraudGPT, WormGPT, or GhostGPT are specifically designed for illicit activities. Voice cloning technologies like ElevenLabs and Microsoft’s VALL-E can replicate voices from just a few seconds of audio. For creating fake faces and videos, tools like StyleGAN, Stable Diffusion, DALL-E, and DeepFaceLab are employed, along with lip-sync solutions like Wav2Lip and First-Order-Motion.
These tools are often orchestrated together to produce photorealistic images, deepfake videos, and meticulously edited documents. LLMs generate high-quality, personalized phishing texts and onboarding dialogues, while TTS and voice-cloning models recreate a victim’s or employee’s voice. Image generation tools produce ‘liveness’ videos that can deceive verification systems. Automated AI agents handle the rest—creating accounts, uploading documents, and responding to challenges. These multimodal chains can bypass both automated checks and human verification that relies on trust.
The accessibility of these tools is particularly concerning. Commercial TTS solutions like ElevenLabs and open-source implementations of VALL-E provide high-quality voice cloning to anyone. Stable Diffusion, DeepFaceLab, and similar tools make it easy to generate photorealistic images or deepfakes quickly. As a result, a single operator can create hundreds of convincing, distinct, yet interconnected fake profiles in a short time. Such chains have already been observed in attempts to open fraudulent accounts at financial institutions and cryptocurrency platforms.
AI-Powered Social Engineering
A new frontier in cybercrime is adaptive AI-driven social engineering. Attackers no longer rely on static scripts; they use LLMs that adapt to a victim’s reactions in real time.
Bots begin with automated reconnaissance, scraping social media, professional directories, and leaked databases to build personalized profiles. The LLM then crafts initial messages that mirror a person’s professional tone or institutional language. If there’s no response, the system automatically switches channels—from email to SMS or Slack—and changes tone from formal to urgent. If a target hesitates, the AI generates plausible reassurance, quoting real internal policies or procedures.
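The escalation logic just described (first contact, channel switch, tone shift, reassurance) can be modeled as a small policy table. Every channel name, trigger, and response below is purely illustrative, not drawn from any observed campaign; the point is that the attacker’s ‘script’ is now a policy an AI agent follows rather than a fixed message.

```python
# Illustrative state machine for the adaptive escalation described above.
# Channels, triggers, and ordering are hypothetical examples only.
ESCALATION = {
    # (current_channel, victim_reaction) -> (next_channel, next_tone)
    ("email", "no_response"): ("sms", "urgent"),
    ("sms",   "no_response"): ("slack", "urgent"),
    ("email", "hesitation"):  ("email", "reassure"),  # e.g. cite 'internal policy'
    ("sms",   "hesitation"):  ("email", "reassure"),
    ("slack", "hesitation"):  ("slack", "reassure"),
}

def next_step(channel: str, reaction: str):
    """Return (next_channel, tone), or None when the playbook is exhausted."""
    return ESCALATION.get((channel, reaction))

assert next_step("email", "no_response") == ("sms", "urgent")
assert next_step("email", "hesitation") == ("email", "reassure")
assert next_step("slack", "no_response") is None  # no further channel to try
```

A static phishing script has one path; a table like this, driven by an LLM that generates the actual message content at each step, is what makes the deception adaptive and per-victim unique.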
In a typical scenario, what appears to be a colleague writes via work email, follows up on LinkedIn, and then calls using a cloned voice—all orchestrated by connected AI tools. Dr. Brūzgienė describes this as a new stage of cybercrime evolution:
Social engineering has become scalable, intelligent, and deeply personal. Each victim experiences a unique, evolving deception designed to exploit their psychological and behavioral weak points.
Lithuania’s Cyber Defense Leadership
Lithuania’s digital ecosystem, known for its advanced e-government architecture and centralized electronic identity (eID) systems, faces unique challenges. However, it also demonstrates remarkable progress. The country has risen steadily in international indices, ranking 25th globally in the Chandler Good Government Index (CGGI) and 33rd in the Government AI Readiness Index (2025).
Lithuania’s AI strategy (2021–2030), updated in 2025, prioritizes AI-driven cyber defense, anomaly detection, and resilience-building. The National Cyber Security Centre (NKSC) integrates AI into threat monitoring, which contributed to a fivefold reduction in ransomware incidents between 2023 and 2024. Collaboration with NATO, ENISA, and EU partners further strengthens Lithuania’s hybrid defense capabilities.
Martynas Survilas underscores the broader implications of cyber resilience:
We see cyber resilience not just as a technical task but as a foundation for democracy and economic growth. Through the safe and inclusive e-society mission, we are not only protecting our digital infrastructure but also empowering citizens to trust and participate in the digital world. AI will inevitably be used for malicious purposes, but we can also use AI to defend. The key is collaboration across sectors and continuous education. This mission is one of the tools helping us turn that idea into concrete projects, pilots, and services for people in Lithuania.