Signal is considered one of the most cryptographically secure messaging applications on the planet. Its encryption protocol — the Signal Protocol — is the gold standard adopted by WhatsApp, Google Messages, and dozens of other platforms. Mathematically, it is effectively unbreakable with current technology. Russian intelligence agencies knew this. So they did not try to break it.
On March 9, 2026, the Netherlands' Military Intelligence and Security Service (MIVD) and General Intelligence and Security Service (AIVD) issued a joint public warning confirming a large-scale global campaign by Russian state hackers to seize Signal and WhatsApp accounts belonging to government officials, military personnel, defense employees, and journalists. Dutch government employees had already been confirmed as victims. The technique did not involve cracking encryption, exploiting zero-days, or deploying malware. It involved asking people to hand over their keys — and people did.
Defining Assessment: This is not a story about a broken app. Signal's encryption held. It is a story about the only layer that cannot be patched — the human being using the tool. For defense contractors handling Controlled Unclassified Information under CMMC, this is a direct and present threat. When an employee uses a personal messaging app to share a project update or document link because it is faster than the company's approved system, they create a vulnerability that no amount of technical infrastructure can close.
The technique Russia used is elegant precisely because it exploits human trust in official-looking communications. The attack has two primary variants — both targeting the legitimate security features of the apps rather than any flaw in them.
The first variant is the fake support account. The attacker creates a Signal or WhatsApp account named "Signal Security Support", "Signal Support Chatbot", or similar, then contacts the target directly inside the app with an urgent message: suspicious activity detected, a possible data leak, an attempt to access private messages. To "protect" the account, the target is asked to share their SMS verification code and PIN. Meanwhile, the attacker has requested re-registration of the target's account on a new device, so Signal sends a real code to the target's phone. The target, believing they are helping support, types it back. The attacker now owns the account.
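The code-relay takeover described above can be sketched as a short simulation. This is a toy model of SMS-based re-registration, not Signal's actual registration API; the class and method names are invented for illustration. The point it demonstrates: the server has no way to distinguish the legitimate user from an attacker who has been handed the code.

```python
# Toy model of the verification-code relay takeover. Illustrative only --
# not Signal's real registration protocol; all names are assumptions.
import secrets

class RegistrationServer:
    def __init__(self):
        self.pending = {}   # phone number -> code sent out via SMS
        self.owner = {}     # phone number -> device holding the account

    def request_registration(self, phone: str) -> str:
        """Any device may request re-registration; the code is delivered
        to the phone number by SMS (returned here for the simulation)."""
        code = f"{secrets.randbelow(1_000_000):06d}"
        self.pending[phone] = code
        return code         # lands on the *victim's* handset

    def submit_code(self, phone: str, code: str, device: str) -> bool:
        """Whoever submits the correct code owns the account -- the server
        cannot tell the attacker apart from the legitimate user."""
        if self.pending.get(phone) == code:
            self.owner[phone] = device
            del self.pending[phone]
            return True
        return False

server = RegistrationServer()
sms_code = server.request_registration("+31-6-000")  # attacker triggers this
# "Signal Security Support" asks the victim to read back the SMS code...
relayed = sms_code                                   # ...and the victim complies
server.submit_code("+31-6-000", relayed, device="attacker-laptop")
print(server.owner["+31-6-000"])                     # -> attacker-laptop
```

The defense falls out of the model directly: the code is the ownership token, so the only safe behavior is to never relay it to anyone.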
The second variant abuses device linking. Signal's legitimate "Linked Devices" feature lets one account be used from multiple devices simultaneously, and attackers weaponized it. They created phishing pages displaying QR codes disguised as Signal group invitations, security alerts, or official device-pairing instructions. A target who scanned the code silently added the attacker's device as a linked device, granting permanent read access to all messages without triggering any security alert.
Critical insight: GRU unit APT44 (Sandworm) linked Signal accounts from captured battlefield devices to their own servers — gaining access to Ukrainian military communications from devices taken during combat. The same technique is now confirmed against NATO government targets globally.
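The reason the linked-device abuse is silent can be shown in a few lines. This is a minimal sketch, not Signal's real pairing protocol; the `Account` class is invented for illustration. The app does exactly what it promises: it delivers every message, encrypted, to every linked device, whether that device belongs to the user or to the GRU.

```python
# Minimal model of linked-device fan-out. Illustrative assumption-laden
# sketch -- not Signal's actual implementation.
class Account:
    def __init__(self, primary: str):
        self.linked = [primary]
        self.inbox = {primary: []}

    def link_device(self, device: str) -> None:
        """Scanning a pairing QR adds a device. In the attack, the QR on a
        phishing page encodes the attacker's device, and no alert fires."""
        self.linked.append(device)
        self.inbox[device] = []

    def receive(self, message: str) -> None:
        # Encryption protects delivery to every linked device equally.
        for device in self.linked:
            self.inbox[device].append(message)

acct = Account("victim-phone")
acct.link_device("attacker-server")   # disguised as a group-invite QR
acct.receive("Delivery schedule attached")
print(acct.inbox["attacker-server"])  # -> ['Delivery schedule attached']
```

Nothing in the model is broken: the fan-out is the feature working as designed, which is why no patch can address it.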
The most dangerous misconception in enterprise security today is the belief that end-to-end encryption makes a communication channel safe for sensitive work. It does not. Encryption protects data in transit between two points. It says nothing about the security of the endpoints, the behavior of the users, or what happens when one endpoint is silently handed to an adversary.
The transit channel. Every message sent between the two devices was mathematically scrambled using the Signal Protocol. No one on the network — not ISPs, not government intercept, not even Signal itself — could read the content in transit.
The encryption worked. Russia never attacked it. Attempting to break Signal Protocol encryption with current technology would take longer than the age of the universe.
The math was perfect. The lock was unbreakable. The door it was protecting was left open by a human who did not know better.
The human endpoint. Once Russia convinced the target to hand over a verification code or scan a malicious QR, the attacker's device became a legitimately registered or linked endpoint. The encryption now protected Russia's access just as faithfully as the original user's.
The app worked exactly as designed — encrypting and delivering messages to all linked devices including the attacker's. Security features became the attack vector.
No patch fixes this. No algorithm prevents a person from handing over their key when they believe they are talking to official support.
A significant reason this attack succeeds at scale is that users conflate "encrypted" with "secure for work." The differences between apps matter enormously in a defense contractor context — particularly the widely misunderstood Telegram encryption model.
| App | Encryption | Enterprise Control | CUI Status |
|---|---|---|---|
| WhatsApp | E2E encrypted in transit | Personal accounts — no corporate control. Metadata to Meta servers. | Not for CUI |
| Telegram | Default chats NOT E2E — stored on Telegram servers. Only "Secret Chats" are E2E, and they don't work in groups. | No corporate control. Widely misunderstood encryption model. | Not for CUI |
| Signal | Strong E2E encryption — confirmed unbroken | Personal accounts — confirmed Russian target. No enterprise audit trail. | Not for CUI |
| Slack Enterprise | Encrypted in transit and at rest | SSO + MFA enforced, corporate-controlled, audit logs, data stays within org boundary. | Acceptable |
| DoD-Approved Platforms | FIPS-validated cryptography | DISA-approved, FedRAMP authorized — designed for CUI handling. | CUI Compliant |
The Telegram misconception explained: Telegram uses client-server encryption for regular chats, meaning Telegram's servers can read those messages. Only "Secret Chats" are end-to-end encrypted, and they do not exist for group chats at all. An employee who believes Telegram is more secure than WhatsApp because they "heard it uses encryption" is in fact on a less secure platform: WhatsApp encrypts every chat end-to-end by default, while Telegram does not.
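The Telegram model above reduces to a two-question check that works well in training material. The helper below is an illustrative sketch of that logic, not anything from Telegram's API; the function name and chat-type labels are assumptions.

```python
# Illustrative decision helper for the Telegram encryption model described
# above. Not Telegram API code; names are invented for training purposes.
def telegram_server_can_read(chat_type: str, secret: bool) -> bool:
    """True if Telegram's servers can read the message content."""
    if secret and chat_type == "direct":
        return False  # Secret Chat: end-to-end, server sees only ciphertext
    return True       # default cloud chat, any group, any channel: server-side keys

print(telegram_server_can_read("direct", secret=False))  # -> True
print(telegram_server_can_read("group",  secret=True))   # -> True (Secret Chats have no group mode)
print(telegram_server_can_read("direct", secret=True))   # -> False
```

The second call is the one employees get wrong most often: because Secret Chats cannot be groups, every Telegram group conversation is server-readable, full stop.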
The Russian campaign targets government officials and military personnel — but the threat surface extends directly to every defense contractor whose employees communicate about work on personal devices. The defense industrial base operates in a hybrid environment: employees handle sensitive procurement details, technical specifications, contract information, and project communications that may touch CUI — often on devices and apps their employer does not control.
A consistent pattern emerges across organizations that handle sensitive data: employees default to familiar personal apps not because they intend to violate policy, but because the approved tool feels less convenient and they genuinely do not understand why the distinction matters. The Dutch intelligence report confirmed this explicitly — government officials used Signal for sensitive communications not because their approved secure systems failed, but because Signal was the app they already knew. Convenience overrode compliance.
This behavior is entirely predictable and entirely addressable — but only through training that explains the real risk in practical terms, not through policy documents that employees file and forget.
Analyst observation: In real-world security awareness testing, phishing failure rates at companies with dedicated security teams — even those handling highly sensitive data — consistently run 20–40% on first exposure. Individuals who fail are not generally careless; they lack specific knowledge of what a particular attack looks like. The fake Signal support message exploits exactly this gap. A defense contractor employee who has never seen a credential-harvesting attempt inside an encrypted app has no frame of reference to recognize it.
CMMC Level 2 contains explicit controls governing where and how Controlled Unclassified Information can be handled, transmitted, and stored. When an employee sends a work document via WhatsApp, discusses project details in a Signal group, or shares a contract update in a Telegram channel — they are almost certainly moving CUI outside the controlled information environment their organization is responsible for protecting. This is not a theoretical compliance concern. It is an active intelligence collection opportunity for Russian state actors who are confirmed to be operating against exactly these channels right now.
The supply chain dimension: CMMC requirements flow down from prime contractors to subcontractors under DFARS 252.204-7021. A small precision machining shop in Connecticut supplying components to Electric Boat, with employees sharing project photos via WhatsApp, is a valid intelligence target and a potential entry point into the broader submarine program. Russia does not need to compromise Electric Boat directly when it can read the communications of their supply chain.
Technical controls can restrict unauthorized app usage. Policies can prohibit personal messaging for work. But neither addresses the root cause: people who have never been shown what a social engineering attack looks like inside an encrypted messaging app do not know to be suspicious. The attack succeeds because the target has no mental model for it.
Security awareness testing consistently shows that employees who experience a simulated phishing event and receive targeted training afterward become significantly better at identifying future attempts — not just the same attack type, but novel variations. The exposure creates a cognitive pattern that generalizes. For defense contractors, this means the question is not whether to train employees on social engineering. It is whether to train them before or after a Russian intelligence agency reads their Signal messages.
Policy perspective: CMMC security awareness training should not be positioned to employees as a compliance requirement — it should be positioned as a professional skill that protects them personally as well as the organization. Employees who understand why Signal is unsuitable for CUI communications — not just that it is forbidden — are dramatically more likely to comply consistently, including in edge cases the policy never explicitly anticipated.
The Russian Signal campaign is not a new threat category that CMMC failed to anticipate. Every element of this attack — unauthorized communication channels, credential disclosure, absence of employee training, unapproved application usage — maps directly to existing CMMC Level 2 requirements. The gap is not in the framework. It is in implementation.
Russia did not break Signal. They did not need to. The most sophisticated cryptographic protocol in consumer messaging is irrelevant when the person holding the keys can be convinced to hand them over by a message that looks like it comes from support. This is not a new insight — social engineering has been the dominant attack vector for decades — but it lands with particular force in the context of an app that has built its entire brand on being unbreakable.
The lesson for the defense industrial base is not that employees should stop using encrypted messaging apps. It is that encryption is a technical control that addresses a technical threat. Social engineering is a human threat that requires a human control — training, awareness, clear policy, and approved alternatives that make compliance the path of least resistance.
CMMC already requires all of this. The controls exist. The framework is in place. The gap is the distance between a policy document that says "use approved communication channels" and an employee who genuinely understands why that rule exists, what a violation looks like in practice, and what to do when they receive a message from an account called "Signal Security Support."
Final Assessment: Russian intelligence agencies are actively reading the Signal and WhatsApp conversations of government officials, military personnel, and defense-adjacent professionals right now. The encryption is working. The humans are the vulnerability. For Connecticut defense contractors in CMMC Phase 1 compliance — many still treating security awareness training as a checkbox — this campaign is a direct and current threat. The cost of addressing it is a few hours of well-designed onboarding training and a clear policy on approved communication tools. The cost of not addressing it is a Russian intelligence analyst reading conversations about engine components, submarine delivery schedules, and helicopter maintenance contracts.
One rule that stops this attack: Your SMS verification code and PIN will never be requested by any legitimate support service through a chat message — ever. If any account, regardless of how official it looks, asks for this code inside Signal, WhatsApp, or any other messaging app: close the conversation, do not share the code, and report it to your security team immediately.
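The rule above can even be approximated in software. The sketch below is a crude screening heuristic a security team might use in awareness exercises or chat filtering: flag any message that both mentions a verification code or PIN and asks the recipient to hand it over. The patterns are assumptions for illustration, not a production detection rule, and no heuristic substitutes for the human rule itself.

```python
# Crude heuristic for code-solicitation messages. Illustrative only --
# the regex patterns are assumptions, not a vetted detection signature.
import re

MENTIONS_CODE = re.compile(
    r"(verification|confirmation|sms)\s+code|\byour\b.*\bpin\b",
    re.IGNORECASE,
)
ASKS_FOR_IT = re.compile(
    r"\b(send|share|provide|reply with|type|tell)\b",
    re.IGNORECASE,
)

def looks_like_code_phish(message: str) -> bool:
    """True if the message both mentions a code/PIN and asks for it."""
    return bool(MENTIONS_CODE.search(message)) and bool(ASKS_FOR_IT.search(message))

print(looks_like_code_phish(
    "Suspicious activity detected. To protect your account, "
    "please reply with the SMS code we just sent you."))          # -> True
print(looks_like_code_phish("Your verification code is 123456."))  # -> False
```

Note the asymmetry: the legitimate delivery message ("Your verification code is 123456") is not flagged, because legitimate services state the code and never ask for it back.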
All findings in this report are based on publicly available information including the joint advisory from the Netherlands MIVD and AIVD (March 9, 2026), Google Threat Intelligence reporting, TechCrunch, NBC News, Malwarebytes, SecurityAffairs, Recorded Future News, and Infosecurity Magazine. This represents the author's independent analysis and does not reflect the views of any employer or client organization.
Yana Ivanov is a security analyst and CMMC compliance consultant based in Connecticut, specializing in cybersecurity risk assessment for defense contractors in the Connecticut defense industrial base. With 15 years of enterprise technology experience and an MS in Information Systems, she brings a practitioner perspective to threat intelligence analysis. She is currently pursuing CompTIA Security+ and CMMC Registered Practitioner certification, with a focus on helping defense supply chain companies achieve genuine — not checkbox — security compliance. This analysis was produced independently as a contribution to the security community's understanding of active threats against US defense infrastructure.