It Always Starts With an Email
A machinist at a defense subcontractor gets an email. It looks like it's from HR. It says there's an update to the benefits portal and they need to verify their credentials. They click. They log in. And just like that, someone who isn't them now has access to systems that handle Controlled Unclassified Information — the kind of data the Department of Defense is legally required to protect.
This isn't a hypothetical. Over 90% of cyberattacks begin with a phishing email. Not a sophisticated zero-day exploit. Not a nation-state hacking team tunneling through your firewall. An email. And the scary part? Most employees who click on phishing emails don't know they did anything wrong. [CISA via Zensec, 2025]
For defense contractors, the stakes are even higher. A successful phishing attack doesn't just cost money — it can mean a failed CMMC audit, a lost contract, and in the worst case, compromised national security data making its way to people who shouldn't have it.
The CMMC reality check: Phase 1 enforcement has been active since November 10, 2025, making self-assessment affirmations a condition of contract award. Phase 2 mandatory third-party certification begins in November 2026. Here's the number that should concern every defense contractor: only 1% of contractors are currently audit-ready, down from 8% in 2023 and 4% in 2025. Compliance is trending down, not up. That isn't because the contractor pool grew; it's because as enforcement got real, companies discovered they were far less compliant than they believed. There are 350,000 contractors requiring certification and only 600 certified assessors, and wait times already exceed 18 months. [Full CMMC Supply Chain Analysis →]
People Learn by Doing, Not by Reading Slides
I've sat through enough security awareness training sessions to know what doesn't work: a slide deck with clipart of a fishing hook, a 45-minute video nobody watches, a quiz everyone retakes until they get 100%. These check the compliance box. They don't change behavior.
What actually works is seeing it in your own inbox. When someone looks at an email they received last Tuesday and realizes — oh, that was a phishing attempt — it lands differently than any hypothetical example ever could. The email is real. The sender is someone they almost trusted. The mistake was almost theirs to make.
That's why I built the Email Threat Analyzer. Not to replace security tooling. Not to compete with enterprise email gateways. To give employees — especially at defense contractors who aren't security professionals — a way to look at their own email and understand what threats actually look like in the wild.
The training insight: Organizations that adopt security behavior change programs — not just compliance training — see employees recognize and report phishing attacks with a 6x improvement in 6 months, and reduce malicious clicks by 87%. The difference is making it personal and practical. [Hoxhunt Phishing Trends Report 2025]
What It Does, in Plain English
The Email Threat Analyzer is a browser-based security tool that connects to your Gmail inbox via Google OAuth — read-only access, nothing stored, nothing sent anywhere it shouldn't go — and runs AI-powered threat analysis on your emails. It maps findings to MITRE ATT&CK techniques, flags indicators of compromise, and explains what to look for in language that doesn't require a security degree to understand.
There are four ways to use it. You can connect your Gmail account directly via Google OAuth. You can upload an exported email file. You can paste raw email headers. Or you can run through six built-in demo samples designed to cover the most common attack patterns — Business Email Compromise, credential phishing, malware delivery, social engineering, and more.
Your AI Security Guide, Not a Chatbot
One of the things I was most deliberate about is that this tool should never make someone feel stupid for asking a question. Security is full of jargon that excludes people — and that exclusion is a security risk in itself. If your employees don't understand why something is dangerous, they can't make good decisions.
CIPHER is the tool's built-in AI analyst. When you open any flagged email, CIPHER is available in a side panel to answer questions about what was found and why it matters. Ask it anything — "what does T1566.002 mean?", "why is a Reply-To mismatch suspicious?", "is this email actually dangerous or just weird-looking?" — and it will give you a straight answer in plain English.
CIPHER updates its context when you open a specific email, so its answers are always relevant to what you're actually looking at — not generic advice. It's not a replacement for a real security analyst, but it bridges the gap between "I got a weird email" and "I know exactly what I'm dealing with."
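To make a question like the Reply-To one concrete: a mismatch between the From domain and the Reply-To domain is one of the simplest header checks you can run yourself. Here's a minimal sketch using Python's standard email library — an illustration of the idea, not the analyzer's actual implementation:

```python
from email import message_from_string
from email.utils import parseaddr

def reply_to_mismatch(raw_email: str) -> bool:
    """Flag emails whose Reply-To domain differs from the From domain.

    A mismatch is a common sign that replies are being silently
    redirected to an attacker-controlled mailbox.
    """
    msg = message_from_string(raw_email)
    _, from_addr = parseaddr(msg.get("From", ""))
    _, reply_addr = parseaddr(msg.get("Reply-To", ""))
    if not from_addr or not reply_addr:
        return False  # nothing to compare
    from_domain = from_addr.rsplit("@", 1)[-1].lower()
    reply_domain = reply_addr.rsplit("@", 1)[-1].lower()
    return from_domain != reply_domain

raw = (
    "From: HR Portal <hr@company.com>\n"
    "Reply-To: hr-benefits@mail-verify.example\n"
    "Subject: Action required\n\n"
    "Please verify your credentials."
)
print(reply_to_mismatch(raw))  # True: replies go to a different domain
```

A mismatch isn't proof of an attack on its own (newsletters and ticketing systems do this legitimately), which is exactly why having CIPHER explain the context of a specific finding matters.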
When I Ran It Against My Own Inbox
The most useful thing I did during development was run the tool against my actual Gmail inbox — 70 real emails from the past 30 days. What I found was genuinely interesting, and it made the tool significantly better.
The false positive problem. The tool kept flagging emails from ALLMODERN, Hotels.com, Glassdoor, and a handful of other legitimate senders as critical threats. At first I thought there was a bug. Then I dug into the raw email data and found something I hadn't expected: these companies were embedding invisible Unicode characters — zero-width spaces and variation selectors — between individual letters of my email address in their tracking URLs. It looked exactly like a known attack technique called Glassworm, which uses the same invisible characters to hide malicious payloads.
Turns out, email marketing platforms do this legitimately for recipient fingerprinting — making each email unique so they can track who clicked what. The technique is identical to the attack. The intent is different. That required me to build detection logic that can tell the difference between a marketing platform's tracking fingerprint and an actual invisible Unicode payload attack.
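The core idea behind that threshold-based logic can be sketched in a few lines. A marketing fingerprint interleaves a handful of invisible characters with visible text, while an encoded payload shows up as a long contiguous run of them. This is a simplified illustration; the character set and the run-length threshold here are illustrative, not the tool's actual values:

```python
# Zero-width characters commonly abused for steganographic payloads.
INVISIBLE = {"\u200b", "\u200c", "\u200d", "\u2060", "\ufeff"}

def is_invisible(ch: str) -> bool:
    if ch in INVISIBLE:
        return True
    cp = ord(ch)
    # Variation selectors: U+FE00..FE0F and the supplement U+E0100..E01EF
    return 0xFE00 <= cp <= 0xFE0F or 0xE0100 <= cp <= 0xE01EF

def classify(text: str, payload_run: int = 32) -> str:
    """Classify invisible-Unicode usage by longest contiguous run.

    A few scattered invisible characters look like recipient
    fingerprinting; a long unbroken run looks like encoded data.
    """
    longest = run = total = 0
    for ch in text:
        if is_invisible(ch):
            run += 1
            total += 1
            longest = max(longest, run)
        else:
            run = 0
    if longest >= payload_run:
        return "likely payload"
    if total:
        return "likely tracking fingerprint"
    return "clean"

print(classify("user\u200bname@example.com"))        # tracking fingerprint
print(classify("hi" + "\u200b\u200c" * 40 + "bye"))  # likely payload
```

Zero-tolerance matching (flag any invisible character) would mark half the marketing email on the internet as critical. Run-length and density thresholds are what make the distinction workable in practice.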
The Glassworm detection: The tool includes custom detection rules for Glassworm-style invisible Unicode payloads — a March 2026 supply chain attack confirmed across 151+ GitHub repositories, npm packages, and VS Code extensions. These rules were also submitted as open-source detection rules to Sublime Security's community rule feed (PR #4267). Real-world testing revealed that legitimate email tracking uses the same technique, requiring threshold-based detection rather than zero-tolerance matching. Read the full Glassworm analysis → · Try the standalone detector →
This kind of finding — discovering that a legitimate technology and a malicious technique are nearly indistinguishable at the byte level — is exactly the kind of nuance that makes security hard. It's also exactly the kind of thing a good security analyst needs to be able to recognize and explain.
A practical side benefit — filtering marketing noise: Running your inbox through the analyzer surfaces just how many marketing emails you receive every day. If you're like me and bulk-delete hundreds of promotional emails anyway, you can use Gmail's built-in filters to automatically archive or delete them before they hit your inbox. In Gmail, go to Settings → Filters and Blocked Addresses → Create a new filter. Filter by sender domain (e.g. from:@glassdoor.com), or use the list: search operator to match bulk senders by their mailing-list headers. A cleaner inbox also means fewer emails to scan — and fewer opportunities to accidentally click something you shouldn't.
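If you'd rather script the cleanup, the same filter can be created through the Gmail API's users.settings.filters endpoint. This sketch only builds the filter resource; actually submitting it requires an authorized API client, which is omitted here, and the domain is just an example:

```python
def build_archive_filter(sender_domain: str) -> dict:
    """Build a Gmail API filter resource that archives mail from a domain.

    The shape matches the Filter resource used by
    gmail.users.settings.filters.create: a 'criteria' block describing
    which messages match, and an 'action' block applied on arrival.
    Removing the INBOX label archives the message without deleting it.
    """
    return {
        "criteria": {"from": f"@{sender_domain}"},
        "action": {"removeLabelIds": ["INBOX"]},
    }

print(build_archive_filter("glassdoor.com"))
```

Archiving (removing the INBOX label) is a safer default than deleting: the mail is still searchable if you ever need it, but it never lands in front of you.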
Phishing Simulation for Defense Contractors
Here's where this becomes more than a portfolio project. One of the most effective security awareness techniques is the phishing simulation — you send crafted phishing emails to employees, see who clicks, and then use that as a teaching moment. Companies like KnowBe4 and Proofpoint charge significant money for this service. The Email Threat Analyzer makes it something you can do in a controlled, hands-on workshop format.
The engagement looks like this: before the session, I send simulated phishing emails to employees — crafted to match real attack patterns relevant to the defense industry (fake supplier invoices, IT helpdesk password resets, HR policy updates, government portal credentials). During the session, employees connect their work Gmail or upload the emails, and the tool shows them exactly what they received, why it was suspicious, and what they should have done differently.
The difference between this and a slide deck is that they're looking at an email they actually received. The mistake was almost real. The lesson lands.
Why this matters for CMMC specifically: CMMC Practice AT.2.056 requires organizations to provide security awareness training that includes recognizing and reporting potential indicators of threat — including phishing. A phishing simulation that produces documented results directly satisfies this requirement and gives you evidence for your assessment. It's not a checkbox — it's proof that your people can actually identify threats. [DoD CMMC Program]
| Attack Type | What it looks like | CMMC relevance | Covered by tool |
|---|---|---|---|
| Business Email Compromise | Fake CEO or CFO requesting urgent wire transfer or supplier payment change | AT.2.056 · AC.1.001 | Yes — BEC demo included |
| Credential Phishing | Fake Microsoft 365, VPN, or government portal login page | AT.2.056 · IA.1.076 | Yes — credential harvest demo |
| Malware Delivery | Fake HR policy PDF with hidden .exe, or ZIP with disguised executable | AT.2.056 · SI.1.210 | Yes — malware delivery demo |
| Vendor Invoice Fraud | Lookalike vendor domain requesting banking detail update before payment | AT.2.056 · AC.1.001 | Yes — invoice BEC demo |
| Invisible Unicode Payload | Emails containing hidden Glassworm-style steganography in body or attachments | AT.2.056 · SI.1.211 | Yes — custom Glassworm detection |
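The malware delivery row above hinges on a trick worth seeing up close: a file named HR_Policy.pdf.exe reads as a PDF to most people, but the operating system only cares about the last extension. A filename check for this is straightforward to sketch. The extension lists here are illustrative, not exhaustive:

```python
import re

# Extensions that should never arrive disguised as documents (illustrative).
EXECUTABLE_EXTS = {"exe", "scr", "js", "vbs", "bat", "cmd", "ps1", "jar"}
DOCUMENT_EXTS = {"pdf", "doc", "docx", "xls", "xlsx", "txt", "csv", "zip"}

def suspicious_attachment(filename: str) -> bool:
    """Flag double-extension tricks like 'HR_Policy.pdf.exe'.

    Also catches the Unicode right-to-left override (U+202E), which
    attackers use to visually reverse the tail of a filename and hide
    the real extension.
    """
    name = filename.lower().strip()
    if "\u202e" in name:  # RTL override hides the true extension
        return True
    parts = re.split(r"[. ]+", name)
    if len(parts) >= 3 and parts[-2] in DOCUMENT_EXTS and parts[-1] in EXECUTABLE_EXTS:
        return True  # document-looking extension followed by an executable one
    return parts[-1] in EXECUTABLE_EXTS if len(parts) >= 2 else False

print(suspicious_attachment("HR_Policy.pdf.exe"))  # True
print(suspicious_attachment("Q3_report.pdf"))      # False
```

A name check like this is a first-pass heuristic, not malware detection; the tool's malware delivery demo pairs it with content-level indicators.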
Why Email Security Is a Compliance Issue, Not Just an IT Issue
Most people in the defense supply chain think of CMMC as an IT problem. Their IT team handles it. They just have to take a training course once a year. That framing is exactly why 99% of contractors are not currently audit-ready.
CMMC is a people problem as much as it is a technical one. The most common initial access vector in defense-related breaches isn't a firewall misconfiguration. It's a phishing email. An employee clicks a link. A credential gets harvested. An attacker sits quietly inside the network for an average of 254 days before anyone notices. By then the damage is done. [Hunto AI, 2026]
AT.2.056 Security awareness training. Ensure personnel are aware of security risks associated with their activities and of applicable policies and procedures. Phishing simulation with documented results directly satisfies this control.
AT.2.057 Role-based training. Ensure personnel with security responsibilities receive appropriate training. Using this tool in a facilitated workshop creates a documented, role-relevant training record.
SI.1.210 Malicious code protection. Identify, report, and correct information and information system flaws — including those introduced through malicious email attachments and phishing links.
The good news is that awareness training actually works — when it's done right. Organizations with active security behavior programs reduce phishing incidents by 86% and employees report threats 4x more often. The key word is "active." Passive training — a video, a quiz — produces passive results. Hands-on simulation produces behavior change. [ControlD / Cisco Talos, 2025]
See What's in Your Inbox
The tool is free, open source, and live. You don't need an account. You don't need to install anything. If you have a Gmail account and a browser, you can run a threat analysis on your inbox right now.
If you're not ready to connect Gmail, start with the Demo Samples — six built-in emails covering the most common attack patterns, with full AI analysis and CIPHER available to answer questions. It takes about five minutes and gives you a real sense of what the tool does.
If you're a defense contractor or CMMC consultant interested in using this as part of a phishing simulation engagement, I'd be glad to talk through how it fits into your workflow. The tool is built to be run in a facilitated workshop — not just handed to employees and forgotten.
Open source: The detection rules, source code, and Glassworm MQL rules submitted to Sublime Security are all available on GitHub at github.com/yana-ivanov/cybersecurity-portfolio. The Glassworm detection rule is live in PR #4267 on the Sublime Security community rule feed.
Sources
| Source | Stat / Finding | Link |
|---|---|---|
| IBM Cost of a Data Breach Report 2025 | $4.88M average cost per phishing breach | ibm.com/reports/data-breach |
| Verizon DBIR 2025 | 68% of breaches involve human element · 38% lower click rates after training | verizon.com/business/dbir |
| FBI IC3 Annual Report 2024 | $2.77B in BEC losses in the US in 2024 | ic3.gov/AnnualReport |
| Hoxhunt Phishing Trends Report 2025 | 6x improvement in reporting · 87% reduction in malicious clicks · 86% reduction in incidents | hoxhunt.com/guide/phishing-trends-report |
| CISA / Zensec 2025 | Over 90% of cyberattacks begin with phishing | zensec.co.uk/phishing-statistics |
| Hunto AI Phishing Statistics 2026 | 254-day average dwell time after phishing breach | hunto.ai/blog/phishing-attack-statistics |
| CMMC Supply Chain Analysis — Yana Ivanov, 2026 | 1% audit-ready (down from 8% in 2023) · 350K contractors · 600 assessors · 18-month wait times | CMMC Supply Chain Analysis |
| DoD CIO — CMMC Program | CMMC enforcement active November 10, 2025 · AT.2.056 requirements | dodcio.defense.gov/CMMC |
| Sublime Security — PR #4267 | Glassworm invisible Unicode detection rule submission | github.com/sublime-security/sublime-rules/pull/4267 |