CMMC CYBERSECURITY TRAINING
Defense Supply Chain Onboarding — Complete 14-Module Training
Required Training — Defense Contractors

CMMC Cybersecurity
Onboarding Training

This training is required for all personnel who work with or around Controlled Unclassified Information (CUI) on defense contracts. You will learn what CUI is, who is targeting it, and exactly what you must do — and not do — to protect it.

350,000+
Defense contractors now subject to mandatory CMMC cybersecurity requirements
$4.6M
False Claims Act settlement paid by MORSECORP in 2025 for cybersecurity non-compliance — triggered by one whistleblower
30 GB
F-35 design data stolen from a subcontractor whose administrator password was "admin"
5 years
How long Volt Typhoon operated undetected inside US critical infrastructure — by deleting audit logs

Training Modules — Complete Curriculum

01
What Is CUI and Why You Are a Target
~20 min
02
The Threat Is Real — Real Breaches, Real Costs
~25 min
03
Phishing, Spearphishing, and APT5's Playbook
~20 min
04
Password & Credential Security
~20 min
05
CUI Handling — What You Can and Cannot Do
~20 min
06
Insider Threat — Recognizing and Reporting
~20 min
07
Executive Exposure — The SPRS Affirmation & Legal Risk
~20 min
08
After a Breach — Reporting Requirements & Your 72-Hour Duty
~20 min
09
Mobile Devices & Remote Work — BYOD, VPN, and Working Away from the Office
~20 min
10
Social Media & Public Disclosure — What You Can and Cannot Post
~20 min
11
Physical Security & Visitor Control — Badges, Tailgating, and Clean Desk
~20 min
12
Software, Updates & Configuration — Shadow IT and Patch Hygiene
~20 min
13
AI Tools & CUI Risk — ChatGPT, Copilot, and the Compliance Gap
~20 min
14
Annual Recertification — Highest-Risk Topics Refreshed
~15 min

Each module ends with a 5-question quiz. You need 80% or higher to pass and unlock the next module. You may retry failed quizzes. A completion certificate is generated when all 14 modules are passed.

Module 01 of 14

What Is CUI and Why You Are a Target

Before you can protect sensitive defense information, you need to understand what it is, what makes it valuable to adversaries, and why your role — regardless of job title — puts you in the middle of a national security equation.

1.1 What Is Controlled Unclassified Information?

Controlled Unclassified Information — CUI — is government information that is sensitive enough to require protection but is not classified. It lives in a specific category defined by Executive Order 13556 and the DoD CUI Program.

CUI is not stamped SECRET or TOP SECRET. It does not require a security clearance to access. But it is legally controlled, and anyone who handles it — at any tier of the defense supply chain — is legally obligated to protect it.

The Key Point: You do not need to be a cleared facility or handle classified documents to be subject to CUI requirements. If your company works on a DoD contract and you touch technical drawings, specifications, contract data, engineering models, or certain communications — you are almost certainly handling CUI right now.

1.2 What Does CUI Actually Look Like?

CUI is not an abstract category. It is the specific files, drawings, and data you work with every day. Here are common examples in defense manufacturing and contracting environments:

  • Technical drawings and CAD files — specifications, tolerances, materials, assembly instructions for defense components
  • Engineering models and simulation data — performance characteristics, test results, failure analysis
  • Contract documents and statements of work — program names, delivery schedules, pricing structures
  • Export-controlled technical data — anything subject to ITAR or EAR regulations
  • Procurement and supplier information — vendor lists, pricing, supply chain structure
  • Personnel security information — background check data, clearance status
  • Network diagrams and system documentation — anything describing the IT systems that process defense data

If you are unsure whether something is CUI, treat it as CUI until you can confirm otherwise with your supervisor or security officer. The cost of over-protecting information is zero. The cost of under-protecting it can be a federal criminal investigation.

1.3 Why Does CUI Matter to Foreign Adversaries?

Nation-state adversaries — primarily China, Russia, Iran, and North Korea — invest billions of dollars annually in stealing US defense technology. The goal is not random disruption. It is systematic reconstruction of American military capability without paying the research and development cost.

A single technical drawing from a defense subcontractor does not give an adversary a complete weapon system. But combined with drawings from other suppliers, performance data from program databases, and contractor communications intercepted over time, it contributes to a comprehensive intelligence picture. The adversary is building a puzzle. Every CUI document you allow to leave your protected environment is a piece they do not have to steal from somewhere harder.

Real Case — F-35 Joint Strike Fighter

A small Australian subcontractor on the F-35 program suffered a breach. Approximately 30 gigabytes of F-35 design data was stolen — including detailed airframe specifications. The attacker accessed the network using the domain administrator account. The password had never been changed from the manufacturer default: "admin." The program this data came from will cost US taxpayers $1.5 trillion over its lifespan. The breach was enabled by one unchanged password.

1.4 The Supply Chain Is the Attack Surface

Prime contractors like Electric Boat, Pratt & Whitney, and Sikorsky invest heavily in cybersecurity. They have dedicated security teams, sophisticated technical controls, and years of compliance maturity. Adversaries know this. They do not attack the front door of a heavily defended fortress. They find the subcontractor three tiers down whose network has default passwords and no security monitoring.

1%
Defense contractors that are fully audit-ready for CMMC
99%
Potential weak links in the supply chain
350K+
Contractors subject to mandatory CMMC requirements

This means that your organization — regardless of its size — is a node in a security architecture that ultimately protects American military capability. A breach at your facility is not just your company's problem. It is a supply chain problem that potentially affects every program that flows through your work.

The Adversary's Logic: Why attack Lockheed Martin's hardened network when you can achieve the same intelligence outcome by compromising a 25-person machining shop that handles the same technical drawings and whose IT environment consists of one unpatched server and a shared administrator password?

1.5 CMMC — What It Requires and What It Means for You

The Cybersecurity Maturity Model Certification (CMMC) is the DoD's mandatory framework for verifying that defense contractors protect CUI properly. As of November 10, 2025, CMMC requirements are active in defense contracts. As of November 10, 2026, mandatory third-party certification becomes a condition of contract award for Level 2 contracts.

CMMC is not an IT department project. It is an organization-wide standard that requires every employee who touches CUI to understand and follow specific security practices. The 110 controls in NIST SP 800-171 that CMMC enforces are not abstract technical requirements — they are specific behaviors that prevent specific attacks that have already happened to organizations like yours.

Your Role: You do not need to understand all 110 controls. You need to understand the ones that apply to your daily work — how you handle files, how you use your computer, how you recognize and report threats. That is what this training covers. The technical controls are implemented by IT. The human behavior controls are implemented by you.

MODULE 01 QUIZ — 5 Questions — Minimum 80% to Pass
Question 1 of 5
Which of the following best describes Controlled Unclassified Information (CUI)?
A. Only documents stamped SECRET or TOP SECRET by the government
B. Sensitive government information that requires protection but is not classified
C. Any document created by a defense contractor, regardless of content
D. Internal HR and payroll records kept by government agencies
Question 2 of 5
In the F-35 subcontractor breach described in this module, how did the attacker gain access to the network?
A. By exploiting an unpatched software vulnerability in the company's firewall
B. Through a spearphishing email sent to the CEO's personal Gmail account
C. Using the domain administrator password that had never been changed from "admin"
D. By bribing an employee to copy files to a USB drive
Question 3 of 5
Which of the following is an example of CUI that a defense machining subcontractor might handle?
A. The company's publicly posted job listings and HR forms
B. Technical drawings and specifications for defense components received from the prime
C. Trade magazine articles about manufacturing techniques
D. Internal employee birthday announcements
Question 4 of 5
Why do adversaries prefer to target small subcontractors rather than prime contractors directly?
A. Small subcontractors handle more valuable information than prime contractors
B. Prime contractors are not subject to CMMC requirements
C. Small subcontractors often have weaker security, yet handle the same technical data as primes
D. Attacking subcontractors is not illegal under international law
Question 5 of 5
If you are unsure whether a document or file you are working with is CUI, what should you do?
A. Treat it as regular unprotected information until it is officially labeled CUI
B. Treat it as CUI and confirm with your supervisor or security officer
C. Email it to yourself at home so you can review it in a safe environment
D. Delete it immediately to avoid any compliance risk
Module 02 of 14

The Threat Is Real — Real Breaches, Real Costs

Abstract threats do not change behavior. Specific, documented cases do. This module covers three confirmed breaches and one enforcement action that collectively demonstrate what happens when cybersecurity is treated as an IT department problem rather than everyone's responsibility.

2.1 The MORSECORP Case — A Whistleblower, Five Years, $4.6 Million

On March 26, 2025, the Department of Justice announced that MORSECORP, Inc. — a Cambridge, Massachusetts defense contractor serving the US Army and Air Force — agreed to pay $4.6 million to settle False Claims Act allegations. This was the first major FCA settlement based specifically on cybersecurity non-compliance.

What Happened — MORSECORP, 2018–2023

Between 2018 and 2023, MORSECORP admitted to two failures. First, it used a cloud email provider that did not meet the required FedRAMP security standard. Second — and more damaging — it submitted inaccurate compliance scores to the government database (SPRS) claiming its cybersecurity met requirements. After a third-party assessment produced a substantially lower, failing score, the company did not update its SPRS record. It continued billing the Army and Air Force under contracts that required cybersecurity compliance it was not maintaining. A whistleblower — an insider at the company — filed a complaint under the False Claims Act. The government intervened. The settlement: $4.6 million plus interest. The whistleblower received $851,000 — 18.5% of the recovery.

What this means for you: Every employee who works at a defense contractor that is not maintaining its stated cybersecurity controls is a potential whistleblower. The financial incentive is real — 15 to 30% of whatever the government recovers. You do not need to report malicious intent. You need to report what you observe.

2.2 Volt Typhoon — Five Years Inside US Infrastructure

Volt Typhoon is a Chinese nation-state threat actor documented by the FBI, NSA, and CISA. Beginning as early as 2019, Volt Typhoon penetrated US power grids, water systems, communications networks, and transportation infrastructure. They operated inside these networks for years — sometimes more than five years — without detection.

Their technique is called "living off the land" (LOTL): using the legitimate tools already installed on compromised systems rather than deploying malware that security software would detect. They were invisible because they looked like normal system activity.

How They Stayed Invisible — The Audit Log Deletion

One of Volt Typhoon's consistent behaviors was systematically deleting audit logs after each session. Audit logs are the digital record of who accessed what system, when, and what they did. Without audit logs, a security team cannot reconstruct what happened during a breach. By deleting them, Volt Typhoon erased the evidence of their own presence. This is why NIST 800-171 Control 3.3 — protecting audit logs from unauthorized deletion — exists. It is not abstract. It is a direct response to a documented, confirmed adversary technique that allowed foreign access to US infrastructure for five or more years.
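The intent behind audit-log protection can be illustrated with a toy tamper-evident log. This is a hedged sketch, not a real SIEM design — field names are invented, and in practice logs are also forwarded to a separate, write-protected system. The idea is hash chaining: each entry commits to the hash of the entry before it, so deleting or editing any entry breaks the chain.

```python
import hashlib
import json

def append_entry(log, event):
    """Append an event, chaining it to the hash of the previous entry."""
    prev = log[-1]["hash"] if log else "0" * 64
    digest = hashlib.sha256((prev + json.dumps(event, sort_keys=True)).encode()).hexdigest()
    log.append({"event": event, "prev": prev, "hash": digest})

def verify(log):
    """Recompute the whole chain; any deleted or altered entry breaks it."""
    prev = "0" * 64
    for entry in log:
        expected = hashlib.sha256((prev + json.dumps(entry["event"], sort_keys=True)).encode()).hexdigest()
        if entry["prev"] != prev or entry["hash"] != expected:
            return False
        prev = entry["hash"]
    return True

# Illustrative events only
log = []
append_entry(log, {"user": "svc_admin", "action": "login"})
append_entry(log, {"user": "svc_admin", "action": "read file"})
append_entry(log, {"user": "svc_admin", "action": "logout"})
```

Deleting a middle entry makes verification fail. Note the limitation: truncating entries from the tail is only detectable if the latest hash is also stored somewhere the attacker cannot reach — which is exactly why shipping logs off the compromised host matters.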

2.3 The Stryker Corporation Wiper Attack — March 2026

In March 2026, Iranian threat actor Handala — operating on behalf of Iran's Ministry of Intelligence — conducted a destructive cyberattack against Stryker Corporation, a major US medical device and defense contractor. The attack remotely wiped approximately 200,000 devices across 79 countries.

How a Single Credential Destroyed 200,000 Devices

Handala obtained credentials for a Stryker Microsoft Intune administrator account — the platform used to manage all of the company's enrolled devices. With that single administrator account, they issued a remote wipe command to every device enrolled in the system. 200,000 devices. 79 countries. One compromised admin credential. The attack succeeded because the admin account lacked multi-factor authentication, there was no behavioral monitoring to detect the unusual mass-wipe command, and there was no Multi-Admin Approval requirement that would have forced a second administrator to confirm such a destructive action.

The Scale of One Credential: In a modern enterprise, a compromised administrator account is not a minor incident. It is a master key. This is why CMMC requires multi-factor authentication for privileged accounts, behavioral monitoring, and separation of administrative duties. These controls exist because this exact scenario has already happened — at a company that builds products for the US military.

2.4 Russian Targeting of Every Tier — From 2020 to Today

From at least January 2020 through 2022, the FBI, NSA, and CISA jointly documented a sustained Russian state-sponsored campaign targeting US cleared defense contractors at every tier of the supply chain — not just prime contractors, but small and mid-sized subcontractors with varying levels of cybersecurity maturity.

The targeted programs included command and control systems, intelligence and surveillance systems, weapons and missile development, and combat systems. The attackers used credential harvesting, spearphishing, and exploitation of known software vulnerabilities to gain access. In many cases, the initial entry point was a small supplier whose defenses were significantly weaker than the prime contractors further up the chain.

The Pattern: Nation-state actors have identified the same vulnerability you are reading about in this training — that small subcontractors are the weakest link in a heavily defended supply chain. They have been systematically exploiting that gap for years. CMMC is the government's response. Your awareness and behavior are the human layer that makes that response effective.

2.5 The Cost of Non-Compliance Is Not Hypothetical

The DoJ has now settled nine cybersecurity False Claims Act cases under its Civil Cyber-Fraud Initiative since 2021. The two largest settlements reached $11 million each. The enforcement campaign is accelerating, not slowing down.

$4.6M
MORSECORP settlement — Army & Air Force contracts
$11M
Largest cybersecurity FCA settlements to date (two cases)
$421K
Illinois machining subcontractor — technical drawings — Dec 2025

These are not large prime contractors with complex fraud schemes. They are organizations — some of them small manufacturers — that signed SPRS compliance affirmations they could not support. The mechanism that exposed them in every case was the same: an insider who knew the stated controls were not actually in place and had financial incentive to report it.

The Compliance Obligation Is Personal: Under the False Claims Act, the senior company official who signs the annual SPRS affirmation certifying cybersecurity compliance — typically the CEO or COO — assumes personal liability for the accuracy of that certification. "Reckless disregard" for its accuracy is sufficient for FCA liability. This is not an abstract legal theory. It is the mechanism that produced a $4.6 million payment in March 2025.

MODULE 02 QUIZ — 5 Questions — Minimum 80% to Pass
Question 1 of 5
What specific action by MORSECORP triggered the False Claims Act settlement?
A. They intentionally sold defense secrets to a foreign government
B. They submitted accurate compliance scores but then failed to maintain security controls
C. They submitted inaccurate compliance scores and failed to update them after a third-party assessment revealed failures
D. They refused to purchase cybersecurity software required by their contracts
Question 2 of 5
How did Volt Typhoon avoid detection while operating inside US infrastructure networks for years?
A. By operating only during business hours to blend in with normal traffic
B. By using living-off-the-land techniques and systematically deleting audit logs after each session
C. By deploying advanced malware that neutralized security monitoring software
D. By bribing network administrators at the targeted organizations
Question 3 of 5
In the Stryker Corporation attack, approximately how many devices were remotely wiped?
A. Approximately 5,000 devices in the US only
B. Approximately 200,000 devices across 79 countries
C. Approximately 50,000 devices in North America
D. Approximately 1,200 devices at Stryker's headquarters
Question 4 of 5
What percentage of the MORSECORP FCA settlement did the whistleblower receive?
A. 5% — approximately $230,000
B. 50% — approximately $2.3 million
C. 18.5% — approximately $851,000
D. 1% — approximately $46,000
Question 5 of 5
Who bears personal legal liability if a defense contractor's annual SPRS cybersecurity affirmation is inaccurate?
A. Only the IT administrator who manages the company's cybersecurity systems
B. The senior company official (CEO/COO) who signs the affirmation
C. The DoD contracting officer who awarded the contract
D. No individual bears personal liability — only the company as an entity
Module 03 of 14

Phishing, Spearphishing, and APT5's Playbook

The single most common initial access technique used against defense contractors is not a sophisticated technical exploit. It is an email. This module covers exactly how phishing and spearphishing attacks work, what APT5 — China's primary telecom and defense targeting unit — does specifically, and how you recognize and respond to an attack.

3.1 Phishing vs Spearphishing — The Critical Difference

Phishing

Generic phishing is mass email — fake bank alerts, package delivery notifications, password reset requests sent to millions of addresses simultaneously. Most people recognize these because they are generic. They are not addressed to you personally. They have no context about your job, your company, or your interests.

Spearphishing

Spearphishing is targeted. The attacker has researched you specifically — your name, your job title, your employer, your professional interests, possibly your personal interests from social media. The email they send you is crafted to be plausible and personally relevant. It references your actual industry, a conference you might attend, a publication you might read, a colleague you might know.

The Detection Problem: Spearphishing emails are specifically designed to pass the "does this look suspicious?" test that most security awareness training is built around. If you apply normal phishing detection techniques — checking for generic language, misspellings, weird sender addresses — you may still click a well-crafted spearphishing email. The defense is understanding the pattern of targeting, not just the visual cues.

3.2 APT5 — China's Defense Contractor Targeting Campaign

APT5 is a Chinese nation-state threat actor assessed to operate on behalf of Chinese state intelligence. In 2024 and 2025, APT5 conducted sustained spearphishing campaigns specifically targeting current and former employees of US aerospace and defense contractors.

APT5's Specific Approach — What the Emails Look Like

APT5 does not send generic phishing. They send targeted emails to employees' personal email accounts — not their work accounts — because personal accounts bypass corporate security controls entirely. The emails are crafted around the target's professional profile: fake invitations to defense industry conferences that are real events the target might actually attend, fabricated notifications about training courses relevant to the target's role, fake job opportunities from legitimate defense companies, and simulated vendor communications from known industry partners. The emails are grammatically correct, professionally formatted, and contextually plausible. The link or attachment they contain delivers credential-stealing malware.

Why Personal Email? Your work email goes through corporate spam filtering, advanced threat protection, URL scanning, and security monitoring. Your personal Gmail or Outlook does not. APT5 targets personal accounts precisely because the corporate security controls your employer invested in do not protect them.

3.3 Recognizing a Spearphishing Attempt — Red Flags

The following patterns are consistent across documented spearphishing campaigns targeting defense contractors. None of them alone is definitive — but any combination should trigger verification before you click anything.

  • Unsolicited conference invitations sent to your personal email, especially for events you have not registered for, even if the event is real
  • Training or certification opportunities that arrive unexpectedly and require you to log in or download something to register
  • Urgent requests from apparent colleagues, managers, or vendors asking you to act quickly — urgency is a manipulation technique designed to bypass critical thinking
  • Links that require login before showing you content — especially if the login page asks for both a username/password and a multi-factor authentication code
  • Attachments from unknown senders, even if the sender name appears to be someone you know — sender names are trivially easy to spoof
  • Emails about your employer's contracts or programs sent to your personal email from an external party you have not previously corresponded with

The Verification Rule: If an email asks you to click a link or open an attachment and you did not specifically request what it is offering, verify through a separate channel before acting. Call the conference organizer. Text the colleague. Visit the website directly — do not click the link in the email. The two minutes it takes to verify is the difference between a security incident and a near-miss report.
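The "visit the website directly" rule exists because a link's visible text and its actual target are set independently by the sender. A small sketch — both URLs here are invented for illustration — showing how to see where a link really leads:

```python
from urllib.parse import urlsplit

# What the phishing email DISPLAYS for the link...
display_text = "https://www.defenseconference.org/register"
# ...and where the link ACTUALLY points (set separately by the sender)
actual_href = "https://defenseconference.org.evil-site.ru/register"

# The hostname of the real target is what matters
real_host = urlsplit(actual_href).hostname

# Only the LAST labels (the registrable domain) identify the owner --
# a familiar name at the FRONT of the hostname proves nothing
registrable = ".".join(real_host.split(".")[-2:])
print(real_host)     # defenseconference.org.evil-site.ru
print(registrable)   # evil-site.ru
```

Here the email displays a legitimate-looking address, but the link actually resolves to a host controlled by evil-site.ru — which is why typing the known address into your browser yourself is safer than clicking.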

3.4 What Happens After You Click — The Attack Chain

Understanding what happens after a successful phishing click helps explain why these attacks are so dangerous and why catching them early is so critical.

  • Step 1 — Credential harvest: The fake login page captures your username and password. The attacker now has your credentials — for your personal email account, and if you reused that password, potentially for your work accounts too.
  • Step 2 — MFA bypass (AiTM): Sophisticated attacks use adversary-in-the-middle techniques to capture your multi-factor authentication code in real time, bypassing even MFA-protected accounts.
  • Step 3 — Lateral movement: With access to your email, the attacker searches for useful information — your employer, your colleagues, your work accounts, any defense-related content. They use your account to target your contacts with trusted-sender spearphishing.
  • Step 4 — Escalation: If your credentials work on any corporate system — VPN, work email, project portals — the attacker escalates to your employer's network. One personal account compromise can become a corporate network intrusion.

The Chain Starts With You: Every step after step one depends on step one succeeding. The entire attack chain collapses if you do not click the initial link. You — not the firewall, not the endpoint protection, not the SIEM — are the first and most effective defense at step one.

3.5 How to Report a Suspected Phishing Attempt

Reporting suspected phishing is not an overreaction. It is a required security behavior under NIST 800-171 Control 3.2.3 — awareness of and reporting of indicators of potential insider threat and attack. Every report you make, even if the email turns out to be legitimate, gives your security team information that helps protect the organization.

  • Do not click any links or open any attachments in a suspected phishing email
  • Do not forward the suspicious email to colleagues to ask "does this look weird to you?" — this spreads the threat
  • Report immediately using your company's designated reporting channel — typically a "Report Phishing" button in your email client or a direct email to your security team
  • If you already clicked — report it immediately. Do not wait. Do not hope it was harmless. Reporting quickly gives your security team the best chance to contain any damage. You will not be punished for reporting a mistake.
  • For personal email — report suspected spearphishing to your company's security team even when it arrives in your personal account, if it references your employer, your role, or defense programs

If You Already Clicked: Contact your security team immediately. The sooner you report, the more options they have. A reported click that is caught quickly is a containable incident. An unreported click that is discovered weeks later is a breach investigation. You are protected from retaliation for reporting in good faith — report immediately.

MODULE 03 QUIZ — 5 Questions — Minimum 80% to Pass
Question 1 of 5
What is the key difference between generic phishing and spearphishing?
A. Phishing uses email; spearphishing uses text messages
B. Spearphishing is targeted and personalized to the specific recipient; phishing is mass and generic
C. Phishing is illegal; spearphishing is legal under international law
D. Spearphishing always contains a virus attachment; phishing contains only links
Question 2 of 5
Why does APT5 target employees' personal email accounts rather than work email accounts?
A. Personal email accounts contain more valuable defense information than work accounts
B. Personal email accounts bypass corporate security controls like spam filtering and threat protection
C. Work email accounts are not accessible from outside the corporate network
D. Targeting personal email is required by Chinese law for state-sponsored hackers
Question 3 of 5
You receive an email at your personal Gmail account inviting you to register for a real defense industry conference you have heard of. The email asks you to click a link to register and log in. What should you do?
A. Click the link — the conference is real so the email must be legitimate
B. Forward it to a colleague to ask if they received it too
C. Navigate directly to the conference website to register — do not click the link in the email
D. Reply to the email asking for proof that it is legitimate
Question 4 of 5
You clicked a link in a suspicious email before realizing it might be phishing. What is the correct next action?
A. Wait and see if anything bad happens before reporting — it was probably harmless
B. Change your password immediately and do not tell anyone to avoid embarrassment
C. Report it immediately to your security team — early reporting maximizes containment options
D. Delete your browser history and the email so there is no evidence of the mistake
Question 5 of 5
In a successful spearphishing attack against a defense employee, which step comes immediately after credential capture?
A. The attacker immediately publishes the stolen credentials publicly online
B. The attacker uses the compromised account to search for useful information and target the victim's contacts
C. The attacker sends a ransom demand to the victim's employer
D. The attacker deletes the victim's email account to cover their tracks
Module 04 of 14

Password & Credential Security

The F-35 program was compromised because a subcontractor never changed the default password "admin." The Stryker attack succeeded because a single administrator credential lacked multi-factor authentication. Credential security is not a technical topic — it is a daily behavior. This module covers what you need to do, and why each practice directly prevents a documented attack.

4.1 Why Credentials Are the Primary Target

A valid username and password combination is the most valuable asset an attacker can obtain. It does not trigger malware detection. It does not set off intrusion alerts. It looks like a normal user doing normal work. Once an attacker has valid credentials for a system you have access to, they are effectively you — with all the access and permissions your account carries.

Nation-state actors do not typically need to break down the door. They prefer to walk in through the front with a key. Credential theft is the primary way they obtain that key — through phishing, through password reuse across breached sites, through brute force attacks on weak passwords, and through default credentials that were never changed.

The Default Password Problem: The F-35 subcontractor breach that exposed 30 gigabytes of design data was enabled by administrator credentials that had never been changed from the manufacturer default: "admin" and "guest." This is not rare. Default credentials are the first thing an attacker tries. If they work, the attack is over in minutes. If you have any system in your environment whose credentials have never been changed from the factory default — that is an open door.
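For anyone responsible for equipment, a default-credential audit can be as simple as comparing an inventory against a list of known factory defaults. A hedged sketch — the hostnames, accounts, and defaults here are invented for illustration, and real audits use vendor-specific default lists:

```python
# Common factory-default credential pairs (illustrative subset)
COMMON_DEFAULTS = {
    ("admin", "admin"),
    ("admin", "password"),
    ("guest", "guest"),
    ("root", "root"),
}

# Hypothetical device inventory with current credentials
inventory = [
    {"host": "cnc-01", "user": "admin", "password": "admin"},
    {"host": "files-01", "user": "svc_backup", "password": "mT9kQ2wXr7"},
]

# Flag every device still usable with a vendor default
flagged = [d["host"] for d in inventory if (d["user"], d["password"]) in COMMON_DEFAULTS]
print(flagged)   # ['cnc-01']
```

Attackers run exactly this kind of check against internet-facing systems at scale. Any device your check flags is the "open door" described above.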

4.2 What Makes a Password Strong — and Why Length Matters Most

The most important factor in password strength is length. A 16-character passphrase made of random common words is stronger than an 8-character complex password with symbols. Modern password cracking tools work by trying billions of combinations per second — length exponentially increases the number of combinations they must try.

  • Use passphrases — three or four random unrelated words strung together ("correct-horse-battery-staple") are both memorable and strong
  • Minimum 12 characters for any account — NIST guidance now prioritizes length over complexity requirements
  • Never use personal information — names, birthdates, pet names, employer names, or anything findable on social media are the first things an attacker tries
  • Never use dictionary words alone — "password", "security", "welcome" are cracked instantly
  • Never use keyboard patterns — "qwerty", "123456", "asdfgh" are in every attacker's first attempt list

The NIST Shift: The old advice — "use a complex password with numbers, symbols, and mixed case and change it every 90 days" — has been officially retired by NIST. Frequent mandatory rotation leads to predictable patterns (Password1! → Password2! → Password3!). Current NIST guidance prioritizes length, uniqueness, and changing only when compromise is suspected.
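The length-versus-complexity claim is easy to sanity-check with arithmetic. For a secret chosen uniformly at random, entropy in bits is length × log2(alphabet size). The sketch below assumes truly random selection — human-chosen passwords are far weaker than these numbers suggest, which only strengthens the case for randomly chosen passphrases:

```python
import math

def entropy_bits(alphabet_size, length):
    """Bits of entropy for a secret chosen uniformly at random:
    log2(alphabet_size ** length) = length * log2(alphabet_size)."""
    return length * math.log2(alphabet_size)

# 8 random characters drawn from all 94 printable ASCII symbols
complex_8 = entropy_bits(94, 8)        # ~52.4 bits
# 16 random lowercase letters -- no symbols, just longer
lowercase_16 = entropy_bits(26, 16)    # ~75.2 bits
# 5 words drawn from a 7,776-word Diceware-style list
passphrase_5 = entropy_bits(7776, 5)   # ~64.6 bits
```

Sixteen random lowercase letters beat eight random characters drawn from the full symbol set, and a five-word passphrase comfortably beats the eight-character "complex" password while remaining memorable.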

4.3 Password Reuse — The Breach Multiplier

Every major website or service that has ever been breached — and thousands have — has had its user credentials stolen and eventually published online. Attackers maintain enormous databases of these credentials and run them systematically against every other service they can find. This technique is called credential stuffing.

If you use the same password for your personal email, your work email, and your Amazon account, and Amazon is breached, the attacker now has your work email password. They did not hack your employer's systems. They recycled a credential you handed them through a breach at a completely different company.

The Credential Stuffing Scale

In 2024, the RockYou2024 database was published online containing nearly 10 billion unique username/password combinations compiled from thousands of data breaches over the past decade. Attackers with access to this database can check whether any credential you have ever used at any breached service matches a credential you are currently using somewhere else. If you have reused any password from the past decade, that credential is likely already in this database.

The Solution: Every account should have a unique password. This is only manageable with a password manager — a tool that generates and stores unique passwords for every account. Password managers approved for use with CUI systems will be specified by your security team. For personal accounts, any reputable password manager (1Password, Bitwarden, Apple Keychain) is better than password reuse.
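To make the passphrase advice concrete, here is a minimal sketch of the kind of generator a password manager uses, built on Python's `secrets` module (a cryptographically secure random source — never use `random` for credentials). The embedded word list is a tiny placeholder for illustration; real diceware lists contain 7,776 words:

```python
import secrets

# Placeholder word list for illustration only; substitute a full diceware list.
WORDS = ["correct", "horse", "battery", "staple", "anchor", "meadow",
         "copper", "lantern", "orbit", "pillow", "quartz", "ridge"]

def make_passphrase(n_words: int = 4, sep: str = "-") -> str:
    """Pick n_words uniformly at random with a CSPRNG and join them."""
    return sep.join(secrets.choice(WORDS) for _ in range(n_words))

print(make_passphrase())   # e.g. "ridge-copper-staple-meadow"
```

The point of the sketch is the source of randomness: a human picking "memorable" words reproduces predictable patterns, while a uniform random draw does not.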

4.4 Multi-Factor Authentication — Why It Matters and What It Stops

Multi-factor authentication (MFA) requires a second proof of identity beyond your password — typically a code from an app on your phone, a hardware token, or a biometric. CMMC Level 2 requires MFA for all accounts accessing CUI systems and for all privileged accounts.

MFA is the single most effective control against credential theft. Even if an attacker obtains your password, they cannot access your account without the second factor. The Stryker attack succeeded in part because the compromised administrator account did not have MFA enabled. One credential. 200,000 devices wiped.

  • Authenticator apps (Microsoft Authenticator, Google Authenticator) are more secure than SMS codes — SMS can be intercepted through SIM swapping attacks
  • Hardware tokens (YubiKey) are the most secure option for high-privilege accounts
  • Never share MFA codes — a legitimate service will never call you and ask for your MFA code. If someone asks you for your code, it is an attack.
  • Approve MFA prompts only when you initiated them — if you receive an MFA approval request you did not trigger, deny it immediately and report it

MFA Fatigue Attacks: Attackers who have your password sometimes flood your phone with MFA approval requests hoping you will approve one out of frustration or confusion. This is called an MFA fatigue attack. If you receive repeated unexpected MFA requests, do not approve any of them. Contact your security team immediately — someone has your password.

4.5 Your Credential Hygiene Checklist

These are not suggestions. They are CMMC-required behaviors for anyone whose work involves CUI or CUI-adjacent systems. Your organization's security team will verify these practices as part of CMMC compliance.

  • Change all default passwords immediately — any device or system you are responsible for that still has a manufacturer default credential is a critical vulnerability. Change it today.
  • Use unique passwords for every account — no reuse between work accounts, personal accounts, or any other service
  • Enable MFA on every account that supports it — work accounts, personal email, social media, banking, everything
  • Use an approved password manager — do not store passwords in a spreadsheet, a sticky note, or your browser's built-in password manager on an unmanaged personal device
  • Never share passwords — not with colleagues, not with IT support staff who call you, not even with your manager. Legitimate IT staff use administrative access tools — they do not need your password.
  • Report unexpected MFA prompts immediately — an MFA request you did not initiate means someone has your password right now
  • Lock your screen when you step away — an unlocked workstation in a shared environment is an open access point

The 30-Second Test: Right now, without looking anything up — can you name one account where you use the same password you use somewhere else? Can you name one device in your work environment whose default credentials might never have been changed? If yes to either, those are your action items before your next work session.

MODULE 04 QUIZ — 5 Questions — Minimum 80% to Pass
Question 1 of 5
In the Stryker Corporation wiper attack, a single compromised administrator account was used to remotely wipe 200,000 devices. Which missing credential control most directly enabled this?
A. The administrator did not use a long enough password
B. The privileged administrator account lacked multi-factor authentication
C. The account was shared between multiple administrators
D. The password was stored in an unencrypted spreadsheet
Question 2 of 5
According to current NIST guidance, what is the most important factor in password strength?
A. Mandatory rotation every 90 days with increasing complexity requirements
B. Using at least three special characters and one capital letter
C. Length — a longer unique passphrase is stronger than a short complex password
D. Using your employer's name combined with the current year
Question 3 of 5
You receive an MFA approval request on your phone for a work system, but you did not just try to log in anywhere. What does this indicate and what should you do?
A. It is probably a system glitch — approve it and move on
B. Someone may have your password — deny the request and immediately report it to your security team
C. Ignore it — unsolicited MFA requests expire on their own after 30 seconds
D. Approve it to confirm your account is still active
Question 4 of 5
What is credential stuffing and why does password reuse make it dangerous?
A. An attack where someone physically steals written-down passwords from desks
B. Using credentials stolen from one breached service to attempt access to other services — dangerous because reused passwords give instant access
C. A technique for creating complex passwords by combining multiple simple ones
D. Flooding a login page with automated attempts until the account is locked
Question 5 of 5
Someone calls you claiming to be from your company's IT helpdesk and asks for your password to resolve a technical issue with your account. What should you do?
A. Provide the password — IT staff have a legitimate need for it to fix your account
B. Provide only the first half of your password to verify identity
C. Refuse — legitimate IT staff use administrative tools and never need your password — report the call to your security team
D. Change your password to something temporary, share it, then change it back after the call
Module 05 of 14

CUI Handling — What You Can and Cannot Do

Knowing what CUI is matters less than knowing how to handle it correctly every day. This module covers the specific behaviors required for storing, transmitting, printing, and disposing of CUI — and the common mistakes that turn a routine workday into a security incident.

5.1 Storing CUI — Where It Can and Cannot Live

CUI must be stored only in systems that have been approved by your organization for CUI handling. This is not a suggestion — it is a CMMC control requirement. Storing CUI in an unapproved location, even temporarily, creates a compliance gap that could affect your organization's certification.

  • Approved: Systems within your organization's defined CUI enclave — network drives, approved cloud storage, designated workstations that are part of your CMMC assessment scope
  • Not approved: Personal cloud storage (personal Google Drive, Dropbox, iCloud), personal devices, USB drives unless specifically authorized, shared drives outside your enclave
  • Never: Personal email accounts, consumer messaging apps (WhatsApp, iMessage, Facebook Messenger), social media platforms, any system outside your organization's control

The Personal Device Problem: Emailing a CUI document to yourself at home to work on it over the weekend — even with good intentions — moves that document outside your organization's protected environment. Your home computer is not in your CMMC scope. Once CUI leaves your approved environment, you cannot control what happens to it.

5.2 Transmitting CUI — Email, File Sharing, and External Parties

Every time CUI moves from one location to another — from your system to a colleague's, from your organization to a supplier, from your server to a cloud platform — that transmission must be protected. The NIST 800-171 3.13 control family (System and Communications Protection) requires encryption in transit for all CUI.

  • Email: Standard corporate email is generally acceptable for CUI if your organization's email system meets the required security baseline (FedRAMP Moderate or equivalent). Personal Gmail, Yahoo, or Outlook.com are never acceptable for CUI.
  • File sharing: Use only approved platforms. If you need to share a CUI file with an external party — a supplier, a subcontractor — use the approved method specified by your security team, not a consumer file-sharing service.
  • Before sending, ask: Does the recipient have authorization to receive this CUI? Is the system I am sending through approved? Would I be comfortable if my security officer could see exactly what I am sending and how?

The "It's Easier" Trap: The most common CUI transmission mistakes happen because the approved method feels slower or more complicated than using a personal account or a consumer tool. Convenience is the enemy of compliance. If the approved method is genuinely unusable, report that to your supervisor — do not work around it.

5.3 Printing, Physical Documents, and Clean Desk Policy

CUI does not only exist digitally. Printed technical drawings, physical contract documents, and handwritten notes that contain CUI are subject to the same handling requirements as digital files — and are often easier to lose.

  • Do not leave printed CUI unattended — at your desk, at the printer, in a conference room, or in a vehicle. A document left on a printer for 10 minutes has been seen by everyone who walked past it.
  • Lock physical CUI away when you step away from your workspace — in a locked drawer, a locked cabinet, or a secured room. The clean desk policy is a CMMC-required behavior, not a preference.
  • Do not take CUI home in physical form without explicit authorization. A technical drawing in your briefcase is subject to the same requirements as the digital file it came from.
  • Dispose of physical CUI properly — shredding in a cross-cut or micro-cut shredder, or in a designated secure destruction bin. Standard recycling bins and trash cans are not acceptable for documents containing CUI.

The Conference Room Risk: One of the most common physical CUI exposures is documents left behind in conference rooms after meetings — especially when the meeting room is shared with visitors, vendors, or personnel from other organizations. Before leaving any meeting, verify that all printed CUI is accounted for and secured.

5.4 Access Control — Who Can See What

NIST 800-171 Control 3.1 — Access Control — requires that CUI is accessible only to people who have a legitimate need to access it for their work. This is called the "need to know" principle. The fact that a colleague works for your organization does not mean they have authorization to access all CUI your organization handles.

  • Do not share CUI access credentials — if a colleague needs access to a CUI system or document, they need to obtain that access through the proper authorization process, not through your login
  • Do not forward CUI emails to people who are not already authorized recipients — check with your supervisor if you are unsure whether a recipient has authorization
  • Do not leave screens visible to unauthorized individuals — position your monitor so that CUI is not visible to visitors, maintenance personnel, or colleagues who do not have need-to-know
  • Report unauthorized access attempts — if someone without apparent authorization asks to see CUI or attempts to access CUI systems, report it immediately

Visitors and Vendors: Maintenance personnel, delivery staff, visitors, and vendor representatives who enter your workspace may not have authorization to see CUI. Secure or cover CUI — physical and digital — before any unauthorized person enters a workspace where CUI is visible.

5.5 When You Are Unsure — The Three Questions

CUI handling decisions happen quickly in the course of a workday. When you need to make a fast decision about whether something you are about to do with a document or file is acceptable, these three questions provide a reliable filter:

Question 1: Is this information something the government created or uses for a defense contract? If yes, assume it may be CUI.

Question 2: Am I putting this information somewhere my organization's security team has not specifically approved for CUI? If yes, stop.

Question 3: Would I be comfortable if my security officer could see exactly what I am doing with this information right now? If no, stop.

If any answer gives you pause, the correct action is always to stop and ask — not to proceed and hope for the best. Your security officer and your supervisor are resources, not obstacles. The two-minute conversation that prevents a compliance incident is always better than the investigation that follows one.

No Penalty for Asking: You will never be penalized for asking a question about whether something is appropriate. You may be penalized for proceeding with an action that was inappropriate and not asking. When in doubt, ask first.

MODULE 05 QUIZ — 5 Questions — Minimum 80% to Pass
Question 1 of 5
You need to finish a CUI document at home tonight. What is the correct approach?
A. Email the document to your personal Gmail so you can access it from home
B. Save it to a personal USB drive to bring home
C. Use only an approved remote access method to your organization's CUI-approved environment
D. Upload it to your personal Google Drive marked as private
Question 2 of 5
You print a CUI technical drawing for a meeting and the meeting ends early. What do you do with the printout?
A. Leave it in the conference room — it will be cleaned up by the office staff
B. Place it in the standard recycling bin to be disposed of
C. Secure it — lock it in your desk or shred it using the approved cross-cut shredder
D. Keep it in your pocket until you get home then throw it away there
Question 3 of 5
A colleague who works in a different department asks you to forward a CUI design file because they "just need a quick look." What do you do?
A. Forward it — they work for the same company so they are authorized
B. Verify they have documented authorization to access that specific CUI before forwarding
C. Print it and hand it to them in person to avoid creating an email record
D. Share your screen with them in a video call so the file does not move
Question 4 of 5
An HVAC technician arrives to service the equipment in your workspace. There is a CUI technical drawing open on your screen. What should you do?
A. Nothing — the technician is a vetted vendor so they are allowed to see it
B. Lock your screen or close the document before the technician enters the workspace
C. Ask the technician to sign a non-disclosure agreement before they enter
D. Turn the monitor away — as long as they cannot read it clearly it is acceptable
Question 5 of 5
Which of the following is an approved method for disposing of a printed CUI document?
A. Standard office recycling bin
B. Tearing by hand into quarters before discarding
C. Cross-cut or micro-cut shredding, or a designated secure destruction bin
D. Trash can in a locked office
Module 06 of 14

Insider Threat — Recognizing and Reporting

Not every threat to defense information comes from an outside attacker. Insiders — current or former employees, contractors, and business partners with authorized access — are responsible for a significant portion of defense data breaches. This module covers the indicators, the types, and the required reporting behaviors.

6.1 What Is an Insider Threat?

An insider threat is a person who has — or had — authorized access to your organization's systems, facilities, or information, and who uses that access in a way that harms the organization, a government program, or national security. Insider threats are not always malicious. Many are accidental. Some are coerced.

  • Malicious insider: Deliberately steals, leaks, or sabotages information for financial gain, ideological motivation, personal grievance, or on behalf of a foreign power
  • Negligent insider: Causes harm through careless or reckless behavior — leaving CUI in unsecured locations, bypassing security controls for convenience, ignoring policies they consider unnecessary
  • Coerced insider: Compelled to act against the organization by external pressure — blackmail, threats to family, financial desperation exploited by a foreign intelligence service

Why Insider Threats Are Dangerous: An outside attacker must find a way in. An insider is already in. They have legitimate access credentials, they know the organization's systems and procedures, they understand what information is valuable, and their behavior does not immediately trigger anomaly alerts. Insider threat detection requires human observation — which means it depends on you.

6.2 Behavioral Indicators — What to Watch For

NIST 800-171 Control 3.2.3 requires all employees to be trained to recognize and report potential indicators of insider threat. The following behaviors, especially in combination, are documented indicators that warrant attention and reporting:

  • Unusual data access patterns — accessing files, systems, or databases outside their normal work scope, especially late at night or on weekends
  • Large-volume data downloads or exports — copying unusually large quantities of files to external drives, cloud storage, or personal email
  • Unexplained financial changes — sudden unexplained wealth, new expensive purchases, paying off significant debt without an obvious explanation
  • Contact with foreign nationals or foreign government representatives in contexts that seem out of place for their role
  • Expressing strong grievances about the organization, supervisors, or US government policy combined with access to sensitive information
  • Attempting to access areas or information beyond their authorization — repeatedly asking for access to systems or documents outside their job scope
  • Working unusual hours specifically to avoid supervision, combined with other indicators

No Single Indicator Is Definitive: None of these behaviors alone proves insider threat activity. People access files at odd hours for legitimate reasons. People pay off debt with inheritances. Your job is not to investigate or accuse — it is to report observations to your security officer and let trained personnel evaluate them.

6.3 The MORSECORP Whistleblower — An Insider Who Reported

The MORSECORP False Claims Act case — the $4.6 million settlement you studied in Module 2 — was brought to the government's attention by an insider. A person who worked at MORSECORP observed that the company's stated cybersecurity compliance was not accurate and filed a qui tam complaint under the False Claims Act.

That person received $851,000 — 18.5% of the government's recovery. They were protected from retaliation under FCA whistleblower provisions. They did not need to prove malicious intent by the company. They reported what they observed, the government investigated, and the evidence supported the complaint.

The Reporting Obligation Is Yours

NIST 800-171 Control 3.2.3 is not optional. It requires that you be trained to recognize and report indicators of insider threat. That training is this module. The reporting obligation it creates is real. If you observe behavior that meets the indicators covered in this module — particularly behavior that suggests unauthorized access to or exfiltration of CUI — you have a documented obligation to report it through your organization's security reporting channel. You are not expected to investigate. You are expected to report.

6.4 Social Engineering and Foreign Recruitment

Foreign intelligence services do not only use technical means to steal defense information. They also recruit insiders — defense contractor employees who are willing, or who are maneuvered into a position where they feel they have no choice but to cooperate.

Common recruitment approaches include:

  • Professional flattery: Approaching defense employees at conferences, via LinkedIn, or through professional networks with offers of consulting work, speaking opportunities, or business partnerships that create a relationship and a financial connection
  • Ideological alignment: Identifying employees who express sympathy with a foreign government or frustration with US policy and cultivating those views over time
  • Honey traps and personal vulnerabilities: Exploiting financial desperation, romantic relationships, gambling debts, or other personal vulnerabilities to create leverage
  • Gradual escalation: Starting with requests that seem harmless — "just tell me who works on that program" — and progressively escalating to requests for actual CUI

If You Are Approached: If anyone — at a conference, on LinkedIn, through email, or in person — attempts to elicit information about your employer's defense programs, personnel, or technical work in a way that feels unusual, report it to your security officer immediately. You will not be in trouble for reporting an approach. You may be in trouble for not reporting one.

6.5 How to Report — Channels, Protection, and What Happens Next

Knowing that you should report something and knowing how to report it are different things. Your organization's security officer will specify the exact reporting channels during your onboarding — typically a designated email address, a security hotline, or a direct conversation with the Facility Security Officer (FSO).

  • Report through your designated channel — do not post observations on company messaging platforms or discuss them with other colleagues before reporting officially
  • Report facts, not conclusions — describe what you observed, when, and where. Do not attempt to diagnose whether it constitutes insider threat activity. That evaluation is your security officer's job.
  • You are protected from retaliation for good-faith security reports. Federal law and most state employment laws prohibit retaliation against employees who report security concerns in good faith.
  • Anonymous reporting may be available through your organization — ask your security officer whether an anonymous channel exists if that is a concern

The Cost of Not Reporting: Every documented insider threat case includes colleagues who noticed something and said nothing. The reasons are understandable — not wanting to get someone in trouble, uncertainty about whether it was really a problem, concern about workplace relationships. But the cost of a CUI breach — to your organization, to the defense program, to national security — is vastly greater than the discomfort of making a report that turns out to be unfounded.

MODULE 06 QUIZ — 5 Questions — Minimum 80% to Pass
Question 1 of 5
Which of the following describes a "negligent insider threat"?
A. An employee who deliberately sells CUI to a foreign intelligence service
B. An employee who causes harm through careless behavior — bypassing security controls for convenience
C. An employee who is blackmailed into sharing information by an external party
D. A contractor who loses their badge and fails to report it
Question 2 of 5
You notice a colleague consistently accessing large numbers of files outside their normal work area late at night, and recently bought an expensive new car despite discussing financial stress last month. What should you do?
A. Confront the colleague directly and ask them to explain their behavior
B. Say nothing — it is probably a coincidence and you do not want to damage the relationship
C. Report the observations to your security officer — describe what you observed without drawing conclusions
D. Post about it in the team messaging channel to get others' opinions
Question 3 of 5
At a defense industry conference, a well-dressed professional approaches you, compliments your work, and offers a paid consulting arrangement that would involve discussing your employer's programs. What should you do?
A. Accept — consulting work is common in the industry and the pay is attractive
B. Decline politely and report the approach to your security officer when you return
C. Accept the business card but do not follow up — no need to report since you declined
D. Ask them to submit a formal proposal to your HR department
Question 4 of 5
When reporting a potential insider threat indicator, what information should you provide?
A. Your conclusions about what the person is doing and why
B. Only report if you have definitive proof of wrongdoing
C. Factual observations — what you saw, when, and where — without drawing conclusions
D. The names of other colleagues who might have also noticed something
Question 5 of 5
Which NIST 800-171 control specifically requires employees to be trained on recognizing and reporting insider threat indicators?
A. Control 3.1 — Access Control
B. Control 3.5 — Identification and Authentication
C. Control 3.2.3 — Awareness and Training: Insider Threat
D. Control 3.13 — System and Communications Protection
Module 07 of 14

Executive Exposure — The SPRS Affirmation & Personal Legal Risk

This module is required for all employees but is especially critical for senior officials, managers, and anyone who participates in or supports the company's CMMC compliance process. The False Claims Act creates personal liability that does not end when you leave the building.

7.1 What Is the SPRS Affirmation and Who Signs It?

The Supplier Performance Risk System (SPRS) is the DoD database where defense contractors post their cybersecurity compliance scores and affirmations. Under CMMC, every contractor must post an annual affirmation in SPRS signed by a designated senior company official — typically the CEO, COO, or an equivalent executive — certifying that the organization has implemented and is maintaining all required cybersecurity controls at the specified CMMC level.

This is not a checkbox on a procurement form. It is a legal certification. Federal law — specifically the False Claims Act (31 U.S.C. § 3729) — applies to anyone who knowingly submits or causes the submission of a false or fraudulent claim to the government. A SPRS affirmation certifying CMMC compliance that the signatory knows to be inaccurate is a false claim.

What "Knowingly" Means Under the FCA: The FCA does not require proof of deliberate fraud. "Knowingly" includes acting with reckless disregard for the truth. A senior official who signs the SPRS affirmation without verifying the organization's actual compliance posture may be found to have acted with reckless disregard — even if they genuinely believed the company was compliant. Ignorance is not a complete defense when the official had access to information that should have prompted inquiry.

7.2 The MORSECORP Lesson for Every Executive

The MORSECORP case established a precedent that every defense contractor executive must understand: the government will pursue FCA cases against organizations that submit inaccurate SPRS scores, and the mechanism that triggers those cases is increasingly an insider — an employee who knows what the actual compliance posture is and has financial incentive to report the discrepancy.

The MORSECORP Sequence — What Every Executive Should Memorize

1. MORSECORP submitted self-assessment scores to SPRS claiming cybersecurity compliance.
2. A third-party assessment produced a substantially lower, failing score.
3. MORSECORP did not update the SPRS record to reflect the accurate score.
4. The company continued billing the Army and Air Force under contracts requiring compliance.
5. An insider filed a qui tam FCA complaint.
6. The government intervened and settled for $4.6 million plus interest.
7. The whistleblower received $851,000.

The key failure was not the original inaccurate score — it was knowing the score was wrong and not correcting it. That knowing inaccuracy turned a compliance gap into fraud.

7.3 How Any Employee Can Become a Whistleblower

The False Claims Act's qui tam provision allows any private citizen who has original knowledge of fraud against the government to file a sealed complaint on behalf of the government. If the case is successful, the whistleblower receives between 15% and 30% of whatever the government recovers. There is no minimum. There is no requirement that the whistleblower be harmed by the fraud.

What this means in a CMMC context: any employee who observes that their organization is certifying cybersecurity controls it is not actually implementing — a network administrator who knows the SIEM is not configured, an IT manager who knows MFA is not deployed on privileged accounts, a quality manager who knows the policies are not being followed — is a potential qui tam relator with significant financial incentive.

The Whistleblower Economy: At a $4.6 million settlement, a 15% qui tam share is $690,000. At the $11 million settlements that represent the current ceiling, a 15% share is $1.65 million. These are life-changing sums. They represent enormous incentive for any employee who observes cybersecurity non-compliance at a defense contractor to consult an FCA attorney — which is exactly what happened at MORSECORP and at the Illinois machining shop in December 2025.

7.4 What Protects the Organization — and What Doesn't

The most effective protection against FCA liability is genuine compliance — actually implementing the controls your SPRS affirmation certifies. But organizational circumstances change. A cloud migration changes the scope of your CUI environment. A new tool deployment may inadvertently create a gap. A network change may take a certified system out of compliance. The second most important protection is a process for detecting and documenting those gaps promptly.

  • Documented gap assessments — regular reviews that identify current compliance status and gaps are evidence of good-faith effort
  • Plans of Action and Milestones (POA&M) — documented plans to remediate identified gaps with specific timelines are recognized under CMMC and demonstrate that known gaps are being actively addressed
  • Prompt SPRS score corrections — if a gap assessment or third-party review reveals that your current SPRS score overstates compliance, correct it immediately. The MORSECORP case was triggered not by the original gap, but by the failure to update a score the company knew was wrong.
  • What does not protect you: Not knowing. Not asking. Delegating the SPRS signature to someone who has not verified its accuracy. Assuming the IT department has it handled.

For Every Signing Official: Before you sign the next SPRS affirmation, ask your security team or CMMC compliance lead to confirm in writing the current compliance status of every control at the level you are certifying. That confirmation — and their response — becomes documentation that you acted with due diligence, not reckless disregard.

7.5 The Supplier Flowdown Obligation

Prime contractors bear an additional layer of legal and contractual obligation under DFARS 252.204-7021: they must ensure that their subcontractors meet CMMC requirements appropriate to the CUI being flowed down to them. This is not aspirational guidance — it is a binding contractual condition. A prime that awards work to a non-compliant subcontractor is in breach of its own CMMC obligations.

This creates a specific risk for supply chain managers and procurement officials: every subcontract award for work involving CUI requires verification of the subcontractor's CMMC status before award. A purchase order to a subcontractor whose SPRS score is inaccurate or missing may expose the prime's own CMMC certification to challenge.

The 50/50 Opportunity: Rather than viewing subcontractor compliance as a policing problem, forward-thinking prime contractors are investing directly in their critical suppliers' CMMC readiness — co-funding compliance programs on a 50/50 cost-sharing basis. The investment is modest ($37,500–$65,000 per supplier at 50% share). The alternative — losing a specialized supplier to compliance attrition or inheriting FCA exposure through a non-compliant subcontract — is far more costly.

MODULE 07 QUIZ — 5 Questions — Minimum 80% to Pass
Question 1 of 5
Under the False Claims Act, what level of intent is required to establish liability for an inaccurate SPRS affirmation?
A. Deliberate, premeditated fraud with documented intent to deceive the government
B. Knowing conduct — including reckless disregard for the truth of the certification
C. Criminal intent proven beyond a reasonable doubt in federal court
D. No intent required — the FCA is a strict liability statute for government contractors
Question 2 of 5
In the MORSECORP case, what specific action transformed a compliance gap into FCA fraud liability?
A. Having cybersecurity gaps in the first place
B. Failing to hire a C3PAO before the mandatory deadline
C. Knowing the SPRS score was inaccurate after a third-party assessment and not correcting it
D. Using a cloud email provider without prior government approval
Question 3 of 5
A CEO signs the annual SPRS compliance affirmation without verifying the accuracy of the cybersecurity controls with the IT team. If those controls turn out to be unimplemented, can the FCA apply?
A. No — the FCA only applies to deliberate, provable fraud with documented intent
B. Yes — the FCA covers "reckless disregard" for accuracy, which signing without verification can satisfy
C. No — executives are shielded from personal liability by corporate entity protections
D. Only if the company subsequently suffers an actual data breach
Question 4 of 5
A gap assessment reveals that your organization's SPRS score overstates compliance. What is the correct immediate action?
A. Remediate the gap first, then update the score once you are fully compliant
B. Update the SPRS score to reflect actual compliance and document the gap in a POA&M immediately
C. Keep the current score until the next annual affirmation cycle to avoid disrupting contracts
D. Notify the prime contractor verbally but do not update SPRS yet
Question 5 of 5
Under DFARS 252.204-7021, what obligation does a prime contractor have regarding subcontractor CMMC compliance?
A. Primes are only responsible for their own compliance — subcontractors handle their own certifications independently
B. Primes must verify subcontractor CMMC status before award and ensure flowdown compliance throughout the contract
C. Primes must fund 100% of all subcontractor CMMC certification costs
D. Primes are only responsible for Tier 1 subcontractors — lower tiers manage compliance independently
Module 08 of 14

After a Breach — Reporting Requirements & Your 72-Hour Duty

A breach does not end when the attacker leaves your network. What happens in the hours and days after a breach — specifically whether you report it, how quickly, and to whom — determines the legal consequences for your organization and the ability of the government to protect other contractors from the same attack.

8.1 The 72-Hour Reporting Requirement

DFARS 252.204-7012 requires that defense contractors report cyber incidents affecting CUI or defense information systems to the DoD within 72 hours of discovery. This is not a target — it is a legal requirement. The clock starts when the incident is discovered, not when it is confirmed as a breach.

Reports must be submitted through the DoD's DIBNet portal. They must include specific information about the incident — what systems were affected, what information may have been compromised, the date and time of discovery, and initial assessment of the attack vector.

72 Hours From Discovery — Not From Confirmation: The most common mistake organizations make is waiting until they fully understand what happened before reporting. The law does not require full understanding — it requires timely notification. Report what you know when you discover it. You can submit follow-up information as the investigation develops. What you cannot do is wait until you have a complete picture, because by then the 72-hour window has passed.
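The deadline arithmetic is simple but worth making concrete. Below is a minimal sketch of how a response team might track the window; the helper names are hypothetical and not part of any DoD tooling:

```python
from datetime import datetime, timedelta, timezone

REPORTING_WINDOW = timedelta(hours=72)  # DFARS 252.204-7012 window

def reporting_deadline(discovered_at: datetime) -> datetime:
    """Deadline runs from DISCOVERY, not from confirmation of a breach."""
    return discovered_at + REPORTING_WINDOW

def hours_remaining(discovered_at: datetime, now: datetime) -> float:
    """Hours left in the reporting window (negative means it was missed)."""
    return (reporting_deadline(discovered_at) - now) / timedelta(hours=1)

# Example: incident discovered Monday 09:00 UTC; it is now Tuesday 15:00 UTC.
discovered = datetime(2025, 6, 2, 9, 0, tzinfo=timezone.utc)
now = datetime(2025, 6, 3, 15, 0, tzinfo=timezone.utc)
print(reporting_deadline(discovered))     # 2025-06-05 09:00:00+00:00
print(hours_remaining(discovered, now))   # 42.0
```

The point of anchoring the clock to discovery in code, as in policy, is that "we were still investigating" never pauses the countdown.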

8.2 What Counts as a Reportable Cyber Incident?

Under DFARS 252.204-7012, a reportable cyber incident is any action that results in — or could reasonably be expected to result in — an actual or potentially adverse effect on a covered contractor information system or CUI. This includes:

  • Unauthorized access to systems that process, store, or transmit CUI — including any confirmed credential compromise that could have provided such access
  • Suspected exfiltration of CUI — even if you cannot confirm data actually left your network, if it is suspected, it is reportable
  • Malware infection on systems within your CUI environment — regardless of whether the malware appears to have accessed CUI
  • Ransomware attacks affecting covered systems — ransomware encrypts data, which constitutes an adverse effect on the information system
  • Phishing clicks that resulted in credential compromise — an employee who clicked a phishing link and had credentials stolen has triggered a reportable incident if those credentials provided access to CUI systems

When in Doubt, Report: The DFARS reporting standard is "actual or potentially adverse effect." If you are uncertain whether something you observed constitutes a reportable incident, report it and let the DoD determine whether it meets the threshold. Failing to report a reportable incident is itself a compliance violation — and potentially FCA liability if the unreported breach was later discovered.
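The reportability standard above can be expressed as a simple triage rule. This is an illustrative sketch, not an official DoD intake tool — the key design point is that uncertainty defaults to reporting:

```python
def is_reportable(unauthorized_access: bool = False,
                  suspected_exfiltration: bool = False,
                  malware_in_cui_environment: bool = False,
                  ransomware_on_covered_system: bool = False,
                  credential_compromise_with_cui_access: bool = False,
                  uncertain: bool = False) -> bool:
    """Sketch of DFARS 252.204-7012 triage: any actual or potentially
    adverse effect on a covered system or CUI is reportable."""
    return any([
        unauthorized_access,
        suspected_exfiltration,           # suspicion alone is enough
        malware_in_cui_environment,       # even if CUI access is unconfirmed
        ransomware_on_covered_system,
        credential_compromise_with_cui_access,
        uncertain,                        # when in doubt, report — DoD decides
    ])

print(is_reportable(suspected_exfiltration=True))  # True
print(is_reportable(uncertain=True))               # True
print(is_reportable())                             # False
```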

8.3 Internal Reporting — Your Immediate Duty When You Observe an Incident

The 72-hour clock starts when the organization discovers the incident — but the organization can only discover it if employees who observe something report it internally immediately. The most dangerous gap in breach response is the time between when an employee notices something wrong and when they tell someone who can act on it.

  • Report immediately to your supervisor and your security officer when you observe any of the following: unusual system behavior, unexpected access alerts, suspicious files or activity on your workstation, any indication that your credentials may have been compromised
  • Do not attempt to investigate yourself — do not try to determine what happened before reporting. Report first, investigate with trained personnel after.
  • Do not delete, modify, or clean up anything on a potentially compromised system before your security team reviews it. Evidence of how the breach occurred is critical for the mandatory DoD report and for preventing the next breach.
  • Preserve logs and records — if you have administrative access to systems that may be involved, do not rotate logs, do not clear caches, do not make configuration changes until your security team directs you to

8.4 The Cost of Not Reporting — Legal and Operational

Failing to report a cyber incident within the 72-hour window is a violation of DFARS 252.204-7012. It creates FCA liability if the failure to report was knowing. It also deprives the DoD and other defense contractors of threat intelligence that could prevent identical attacks against other organizations in the defense supply chain.

Why Timely Reporting Matters Beyond Your Organization

When a defense contractor suffers a breach, the attack vector — the specific technique used to gain entry — is often the same technique being used simultaneously against other contractors. The DoD's Defense Industrial Base Cybersecurity Assessment Center (DIBCAC) uses breach reports to issue alerts to other contractors about active campaigns. A contractor who reports a phishing campaign that stole credentials allows the government to warn hundreds of other contractors to watch for the same approach. A contractor who does not report leaves the rest of the supply chain uninformed and vulnerable. The 72-hour requirement is not just self-protective. It is part of a collective defense obligation.

The Concealment Trap: Organizations that experience a breach and choose not to report it face compounding risks. The breach may be discovered independently — by the DoD during an assessment, by a whistleblower, or through subsequent incidents. When concealment is discovered, the organization faces not only the original breach liability but added liability for the failure to report. The cover-up is always worse than the crime.

8.5 Your Role in Breach Response — A Practical Summary

You do not need to be a security professional to fulfill your breach response obligations. You need to know three things: what to watch for, who to tell, and what not to do.

What to watch for: Unusual login alerts, unexpected password reset notifications, strange system behavior, files you did not create, outbound network activity at unusual hours, colleagues reporting receiving unexpected emails from your account, any indication your credentials may have been stolen.

Who to tell: Your direct supervisor and your security officer — immediately, within minutes of observation, not after you have figured out what happened. Time is the critical variable in breach containment.

What not to do: Do not delete anything. Do not modify system configurations. Do not tell colleagues before you tell your security officer. Do not post about it on any messaging platform. Do not attempt to contain or clean up the incident yourself. Report it and let trained personnel take it from there.

Completing this training module means you now know what every CMMC-compliant defense contractor employee is required to know. The knowledge is only valuable if you act on it. When you observe something — report it.

MODULE 08 QUIZ — 5 Questions — Minimum 80% to Pass
Question 1 of 5
Under DFARS 252.204-7012, within how many hours of discovery must a defense contractor report a cyber incident to the DoD?
A. 24 hours
B. 72 hours
C. 7 business days
D. 30 days once the breach is fully confirmed and documented
Question 2 of 5
You suspect your workstation may have been compromised. Before reporting to your security team, you decide to delete some suspicious-looking files to clean up the system. Is this correct?
A. Yes — removing suspicious files prevents further damage before the security team arrives
B. Yes — the security team will appreciate that you contained the incident
C. No — report immediately without modifying anything; deleting files destroys forensic evidence
D. Only delete files if you are certain they were created by the attacker
Question 3 of 5
After discovering a potential breach on your workstation, which of these actions would HARM the investigation and should never be done?
A. Reporting the incident to your security officer immediately
B. Leaving the system powered on and untouched until the security team arrives
C. Deleting suspicious files and clearing browser history to "clean up" before reporting
D. Preserving any logs or records on the system without modification
Question 4 of 5
Why does timely breach reporting benefit other defense contractors beyond your own organization?
A. It reduces your organization's insurance premiums for the following year
B. The DoD uses breach reports to issue threat intelligence alerts to other contractors about active attack campaigns
C. Other contractors can file insurance claims based on related incidents
D. It allows the prime contractor to pause the program while the breach is investigated
Question 5 of 5
Your organization experienced a breach six months ago and chose not to report it. The DoD discovers this during a routine assessment. What is the likely consequence compared to if the breach had been reported on time?
A. The consequence is the same — a late report is treated the same as an on-time report
B. The consequence is less severe — the organization fixed the problem so no harm resulted
C. The organization faces both original breach liability and additional liability for failure to report — the concealment compounds the violation
D. There is no additional consequence since the statute of limitations has run
Module 09 of 14

Mobile Devices & Remote Work — BYOD, VPN, and Working Away from the Office

Laptop at a coffee shop. Phone syncing work email on the train. Home office connecting to the company network over residential Wi-Fi. Remote work is now the norm — and every one of those scenarios creates a potential gap in your organization's CMMC compliance boundary. This module covers the specific behaviors required when you work outside the office perimeter.

9.1 Why Remote Work Is a CMMC Compliance Problem

Your organization's CMMC certification covers a defined scope — specific systems, networks, and locations where CUI is processed, stored, or transmitted. That scope was assessed by a C3PAO. It includes your office network, your company-issued devices on that network, and the cloud services your organization approved. It does not include your home router, your personal laptop, the café Wi-Fi, or your phone unless those devices and connections have been specifically brought into scope and secured to the required standard.

When you work remotely in a way that has not been accounted for in your organization's security plan, you are potentially moving CUI outside the protected boundary — even if you do not realize it.

The Invisible Boundary Problem: The office perimeter is visible — you walk through a door, badge in, sit at a desk connected to the corporate network. The remote work security boundary is invisible and behavioral. You cannot see it. You have to know where it is and stay inside it. This module tells you exactly where that boundary is and what keeps you inside it.

9.2 The VPN Requirement — What It Is and When You Need It

A VPN (Virtual Private Network) creates an encrypted tunnel between your remote device and your organization's network — making your connection functionally equivalent to being physically in the office, even when you are sitting in a coffee shop. NIST SP 800-171 Control 3.1.12 requires that remote access sessions be monitored, controlled, and protected using encryption. In practice, this means a company-approved VPN must be active any time you access systems that process or store CUI from outside the office.

  • Always required: Any time you access company systems, files, email, or applications from outside the office network
  • Required before access — not after: Connect the VPN before opening any work application or file. Do not open a CUI document and then connect — that sequence matters
  • Your personal VPN service is not acceptable: Commercial consumer VPN services (NordVPN, ExpressVPN, etc.) are not your company's approved VPN and do not satisfy CMMC remote access requirements
  • If the VPN is down: Stop work on CUI until you can reconnect. Do not work around it by using personal email or cloud storage as a substitute

Split Tunneling Risk: Some VPN configurations use split tunneling — routing only corporate traffic through the encrypted connection while personal browsing goes directly to the internet. If your organization uses split tunneling, be aware that malware downloaded through the unprotected direct connection can still reach your corporate-connected session.

9.3 Public Wi-Fi — The Specific Risks You Face

Public Wi-Fi networks — at coffee shops, airports, hotels, conference centers, and coworking spaces — are fundamentally untrusted environments. You do not control who else is on the network, what the network operator logs, or whether the access point you are connecting to is legitimate.

The Evil Twin Attack

At a defense industry conference, an attacker sets up a Wi-Fi hotspot named "Marriott_Guest" — identical to the legitimate hotel network. Defense contractor employees connect without checking. The attacker's hotspot performs a man-in-the-middle attack on every connection passing through it — intercepting credentials, session tokens, and any unencrypted data. This attack costs under $50 to execute and requires no technical sophistication beyond a cheap laptop and freely available software.

  • Never access CUI on public Wi-Fi without an active VPN — the VPN encrypts your traffic even if the underlying network is compromised
  • Verify network names before connecting — ask a staff member for the exact Wi-Fi name rather than selecting the most plausible option from the list
  • Prefer mobile hotspot over public Wi-Fi — your phone's cellular hotspot is a significantly more secure option for work sessions when away from the office
  • Disable auto-connect — turn off the setting that automatically connects your device to previously used or open networks
  • Use HTTPS — verify that any web-based tool or portal you access shows a valid HTTPS padlock before entering any credentials

9.4 BYOD — Personal Devices and the Compliance Boundary

BYOD (Bring Your Own Device) policies vary significantly across organizations. Some ban personal devices from accessing work systems entirely. Some allow limited access with specific security controls. Some have no policy at all — which is itself a compliance risk.

Regardless of your organization's specific policy, the following rules apply to any personal device that you use to access work systems or CUI:

  • Personal devices used for work enter the compliance scope — if your personal phone syncs work email that contains CUI, that phone is technically within the CMMC assessment boundary and must meet the applicable security requirements
  • You cannot install unauthorized apps on a BYOD device used for work — any app you install could access the same data your work applications do
  • Personal device storage is not approved CUI storage — CUI that downloads to your personal device's local storage (through email attachments, cached files, etc.) creates a compliance violation
  • If your personal device is lost or stolen, report it to your security officer immediately — it is a security incident, not just an inconvenience

The Safest Approach: If your organization allows it, use a company-issued device for all work involving CUI and keep personal and work activity on separate physical devices. The separation is not bureaucratic inconvenience — it is the only reliable way to ensure your personal apps, personal cloud storage, and personal browsing habits cannot create a path to your work data.

9.5 Home Office Security — Your Network Is Part of the Perimeter

If you regularly work from home and access CUI systems, your home network is effectively a branch office of your employer's CMMC-assessed environment. That does not mean your home must be assessed — it means the device you use and the connection you make must meet the security standard, and your home environment must not introduce risks that would undermine it.

  • Change your home router's default password — the same default-password problem that enabled the F-35 breach exists in millions of home routers. A compromised home router can intercept your VPN credentials
  • Keep your home router firmware updated — router vulnerabilities are regularly discovered and patched. An unpatched router is an entry point
  • Separate work and personal devices on your home network — use a guest network for personal devices, smart TVs, gaming consoles, and IoT devices like smart speakers. These devices have poor security track records and should not share a network segment with your work machine
  • Position your screen away from windows and shared spaces — shoulder surfing and visual interception of CUI is a physical security risk even at home, especially during video calls
  • Lock your screen when stepping away — household members, guests, and repair workers do not have authorization to see CUI

Smart Speakers & Voice Assistants: Devices like Amazon Echo, Google Home, and Apple HomePod are always-on microphones. Do not conduct work calls involving CUI, program names, or sensitive technical details in rooms where these devices are present. Several documented cases of corporate espionage have involved ambient audio capture from always-on consumer devices.

9.6 Mobile Device Basics — Phones, Tablets, and Work Data

Smartphones are the most commonly lost and stolen devices in the world — and for most employees they are also the device most closely integrated with work systems through email, calendar, authentication apps, and messaging. That combination of high risk of loss and high work data access makes mobile device security a critical control.

  • Enable full-device encryption on any device that accesses work systems — both iOS and Android support full encryption, and it prevents data access if the device is seized or stolen
  • Enable remote wipe capability — ensure your device is enrolled in a mobile device management system that allows your organization to remotely wipe work data if the device is lost
  • Use a strong PIN or biometric lock — a 4-digit PIN offers 10,000 combinations; a 6-digit PIN offers 1,000,000. Use biometric (fingerprint/face) plus a strong PIN backup
  • Do not jailbreak or root work devices — removing manufacturer security restrictions eliminates multiple layers of protection built into the operating system
  • Install apps only from official stores — third-party app sources bypass the vetting process that catches the majority of malware before it reaches devices
  • Report lost or stolen devices immediately — within minutes, not hours. Remote wipe capabilities are most effective before an attacker has time to extract data

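The PIN arithmetic in the list above is easy to verify, and a rough brute-force comparison shows why the extra digits matter. The guess rate below is a purely hypothetical illustration — real devices throttle attempts and lock out or wipe after repeated failures:

```python
def pin_combinations(digits: int) -> int:
    """Number of possible numeric PINs of a given length."""
    return 10 ** digits

def worst_case_hours(digits: int, guesses_per_second: float) -> float:
    """Time to exhaust the full PIN space at a fixed guess rate."""
    return pin_combinations(digits) / guesses_per_second / 3600

assert pin_combinations(4) == 10_000       # matches the figure above
assert pin_combinations(6) == 1_000_000    # matches the figure above

# Hypothetical attacker trying 10 PINs per second with no lockout:
print(round(worst_case_hours(4, 10), 2))   # 0.28 hours for a 4-digit PIN
print(round(worst_case_hours(6, 10), 2))   # 27.78 hours for a 6-digit PIN
```

Each added digit multiplies the search space by 10, which is why the guidance pairs a longer PIN with biometrics rather than relying on a 4-digit code alone.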
The Authenticator App on Your Phone: If your phone is also your MFA device, losing it means an attacker who finds it could potentially access your accounts if the phone is unlocked. Always keep your authenticator app behind biometric or PIN protection, and have a recovery plan with your security team for the scenario where your MFA device is lost.

9.7 Travel — Conferences, Customer Sites, and International Risk

Business travel introduces security risks that do not exist at your office or home: unknown networks, physical access by hotel staff and border agents, spearphishing campaigns designed around specific conferences, and, for international travel, the legal authority of foreign governments to inspect devices at the border.

  • Request a travel loaner device for international trips — a clean device with minimal data reduces your exposure if it is seized or compromised abroad. Several US defense contractors have standing policies requiring loaner devices for travel to China, Russia, and other high-risk countries
  • Never plug into unknown USB ports — airport charging kiosks, hotel desk ports, and conference charging stations can deliver malware through a technique called juice jacking. Use your own charger plugged into a standard power outlet, or use a USB data blocker
  • Be alert at defense conferences — APT5 and other nation-state actors specifically target defense professionals at industry events. Unexpected approaches offering consulting work, academic collaboration, or business partnerships are documented recruitment techniques
  • Assume hotel room network is compromised — always use your VPN on hotel Wi-Fi, and consider using a cellular hotspot instead for sensitive work
  • Border crossing with work devices — consult your security officer before international travel with any device that contains or can access CUI. Some countries have legal authority to compel device decryption at the border

9.8 Your Remote Work Compliance Checklist

Use this checklist every time you work outside the office on anything that involves CUI systems or data:

Before you start:
☐  Company-approved device — not personal
☐  VPN connected before opening any work application
☐  Network is trusted (home) or VPN is active (public)
☐  Screen not visible to unauthorized individuals

While working:
☐  No CUI saved to local device storage, personal cloud, or personal email
☐  Lock screen when stepping away — even at home
☐  No BYOD devices accessing CUI unless explicitly approved
☐  Sensitive calls not taken near smart speakers or in public spaces

If something goes wrong:
☐  Lost or stolen device → report immediately, do not wait
☐  Unexpected MFA prompt → deny and report
☐  Clicked a suspicious link → report immediately, do not delete anything
☐  Connected to a network you later suspect was malicious → report immediately

MODULE 09 QUIZ — 5 Questions — Minimum 80% to Pass
Question 1 of 5
You are working from a coffee shop and need to access a CUI document on the company server. The VPN is taking a while to connect. Is it acceptable to open the document first and connect the VPN once it is ready?
A. Yes — as long as the VPN is connected before you finish working
B. Yes — the document itself is encrypted so the connection type does not matter
C. No — the VPN must be connected before opening any work application or accessing any CUI
D. Yes — coffee shop Wi-Fi is generally safe for brief access
Question 2 of 5
What is an "evil twin" attack and where is it most likely to occur?
A. A malware variant that creates a duplicate of your files — most common on unpatched home computers
B. A fake Wi-Fi hotspot with a name matching a legitimate network, used to intercept connections — most common at conferences, airports, and hotels
C. A social engineering attack where an attacker impersonates a colleague in person
D. A phishing email that appears to come from your own email address
Question 3 of 5
Your personal phone has your work email configured on it and regularly syncs messages that contain CUI attachments. Under CMMC, what does this mean for your phone?
A. Nothing — personal devices are exempt from CMMC requirements regardless of what data they access
B. The phone is technically within the CMMC compliance scope and must meet applicable security requirements
C. It is only a problem if the phone is lost or stolen
D. Email is exempt from CUI handling requirements since it is transient communication
Question 4 of 5
You are traveling internationally for a defense conference. What is the safest approach for your work devices?
A. Bring your regular work laptop — your VPN will protect everything
B. Request a clean travel loaner device with minimal data and consult your security officer before departure
C. Use your personal laptop instead — personal devices are not subject to foreign government inspection
D. International travel poses no additional risk beyond domestic travel if you use HTTPS
Question 5 of 5
Smart speakers and voice assistants (Alexa, Google Home) are always-on microphones. Why does this make them a risk when working from home on defense-related calls — even though no CMMC rule explicitly bans them?
A. They interfere with VPN connections through Bluetooth signal interference
B. They can passively capture sensitive audio — ambient recording of CUI discussions is a documented intelligence collection technique, not just a hypothetical risk
C. CMMC explicitly prohibits IoT devices within 10 feet of any CUI system
D. They automatically upload all audio to government servers for monitoring
Module 10 of 14

Social Media & Public Disclosure — What You Can and Cannot Post

A single LinkedIn post, a photo from the shop floor, a tweet about a business trip — any of these can expose program information, reveal your employer's defense contracts, or hand adversaries intelligence they could not obtain any other way. This module covers exactly where the line is.

10.1 Why Social Media Is a National Security Issue

Nation-state actors routinely harvest open-source intelligence — publicly available information — from social media profiles of defense contractor employees. They do not need to breach your network if you voluntarily post the information they want. LinkedIn profiles, Facebook check-ins, Instagram photos, and Twitter/X posts have collectively revealed program names, facility locations, colleague networks, travel schedules, and technical details that are supposed to be protected.

This is not hypothetical. The FBI and DCSA have documented specific cases where foreign intelligence services built comprehensive profiles of defense programs and personnel entirely from open-source social media — without conducting a single hack.

The OSINT Threat: OSINT (Open Source Intelligence) is the systematic collection of publicly available information to build intelligence pictures. A nation-state analyst monitoring defense contractor employees on LinkedIn can identify who works on which programs, who their colleagues are, when they travel, and what technical domains they work in — all from public posts. Your social media profile is a free intelligence briefing for anyone willing to read it.

10.2 What You Cannot Post — Hard Lines

The following categories are never acceptable to post on any public platform, personal or professional, regardless of how innocuous they seem in isolation:

  • Program names or contract numbers — "working on the XYZ program" or "just landed a new DoD contract" tells adversaries which programs your organization supports
  • Photos from the work facility — even a selfie at your desk can reveal equipment, documents, screen content, facility layout, badge design, and colleague identities
  • Travel related to defense work — "heading to Electric Boat for a meeting" or "site visit to Groton this week" maps your organization's customer relationships
  • Technical details about your work — materials, processes, specifications, test results — even framed as general professional discussion
  • Colleague names and roles on defense programs — building an adversary's contact list for spearphishing campaigns
  • CUI in any form — screenshots, photos of documents, paraphrased specifications. CUI does not become uncontrolled because you put it on Facebook
  • Security procedures or access controls — badge systems, visitor policies, security checkpoints, even complaints about security inconveniences

10.3 The LinkedIn Problem — Professional Profiles and Program Exposure

LinkedIn is the most significant social media risk for defense contractor employees because it is professional, it is trusted, and it is where people are most likely to be specific about what they work on. A detailed LinkedIn profile describing your role on defense programs is a targeting document for foreign intelligence services.

APT5 LinkedIn Targeting — Documented Pattern

APT5 specifically uses LinkedIn to identify and approach defense contractor employees. The pattern: find employees whose profiles indicate work on high-value programs, send a connection request followed by a message offering consulting work, academic collaboration, or a job opportunity, then use that relationship to elicit program information. The initial LinkedIn approach is the first step in a recruitment or spearphishing campaign. Your LinkedIn profile is their targeting list.

  • Describe your role in general terms — "aerospace manufacturing" not "F-35 component fabrication"
  • Do not list program names or contract numbers in your experience section
  • Be cautious with connection requests from unknown individuals in aerospace, defense, or government sectors — especially if they are based overseas or have sparse profiles
  • Report unusual LinkedIn approaches to your security officer — an unsolicited message offering consulting work related to your defense programs is a reportable contact

10.4 OPSEC — Operational Security for Everyday Life

OPSEC (Operational Security) is the practice of protecting information that could be combined with other information to reveal something sensitive. Individual pieces of information that seem harmless in isolation can be dangerous in combination. A photo of your office window view + a LinkedIn post about your employer + a Facebook check-in at an airport + a tweet about "big week ahead" collectively tells an adversary your facility location, your employer, when you are traveling, and that something significant is happening — enough to time a targeted attack.

  • Think in aggregates, not individual posts — ask not "is this post sensitive?" but "what does this reveal when combined with everything else I have posted?"
  • Be careful with location data — geotagged photos, check-ins, and "currently in [city]" posts create a movement pattern that intelligence analysts can map
  • Work travel is particularly sensitive — posting about business travel reveals customer relationships, program timelines, and your absence from the facility
  • Family members' posts can also expose you — a spouse posting "proud of [name] working on [program]!" is the same disclosure regardless of who posts it

The Aggregation Problem: Your name + your employer + your job title + your city + your physical description from a photo = enough for a targeted spearphishing email that references your real name, your real employer, and a real conference in your real city. Every piece you add to your public profile makes the next attack more convincing.
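
The aggregation idea can be made concrete in a few lines of code. This is an illustrative sketch only — the `aggregate` helper and the sample posts are invented for this example:

```python
# Each post alone looks harmless; merged, they form a targeting profile.
def aggregate(posts: list[dict]) -> dict:
    """Merge individually harmless posts into one adversary-usable profile."""
    profile = {}
    for post in posts:
        profile.update(post)
    return profile

posts = [
    {"employer": "ExampleCorp"},   # LinkedIn bio
    {"city": "Fort Worth"},        # geotagged photo
    {"travel": "at the airport"},  # Facebook check-in
    {"hint": "big week ahead"},    # tweet
]

# Four posts, each revealing one "harmless" fact — together, a movement
# pattern plus an employer plus a signal that something significant is near.
print(sorted(aggregate(posts)))  # ['city', 'employer', 'hint', 'travel']
```

No single post in the list would trip anyone's alarm; the merged dictionary is what an intelligence analyst actually works from.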

What You Can Post — and the Approval Process (10.5)

Not everything about your professional life is off-limits. The goal is not silence — it is discipline. General information about your professional field, your skills, and your employer's public-facing activities is typically acceptable. Specific program information, technical details, and facility information are not.

  • Generally acceptable: Your general professional field and skills, publicly announced contract awards (after official press release), general company culture, professional certifications, industry events you attended
  • Requires approval: Any post that mentions a specific DoD program, contract, or customer relationship — even positive announcements. Check with your supervisor or communications team before posting
  • Never acceptable: Anything in the hard-lines list from slide 10.2, regardless of how general or innocuous it seems to you

When in Doubt, Do Not Post. The cost of not posting something is zero. The cost of posting something that exposes CUI, program information, or security procedures can include loss of contract eligibility, FCA liability, ITAR violations, and personal legal consequences. If you are uncertain whether a post is acceptable, ask your supervisor before posting — not after.

MODULE 10 QUIZ — 5 Questions — Minimum 80% to Pass
Question 1 of 5
A colleague posts on LinkedIn: "Excited to be working on the [specific DoD program name] contract at [your company]! Big week ahead." Why is this a security concern?
A. It is not a concern — LinkedIn is a professional platform and this is normal professional sharing
B. It reveals the program name, employer, timing, and identifies a contact for adversary targeting — all from one post
C. It is only a concern if the post includes a photo
D. It is a concern only if the program is classified
Question 2 of 5
APT5 sends you a LinkedIn connection request followed by a message offering paid consulting work related to your technical specialty. What should you do?
A. Accept and engage — consulting is common in the industry and the pay could be useful
B. Decline politely and report the approach to your security officer
C. Ignore it — no need to report since you did not respond
D. Ask them to send a formal proposal before deciding
Question 3 of 5
What is the "aggregation problem" in the context of social media and OPSEC?
A. The risk that too many people follow your account and increase your visibility
B. Individual harmless posts combined together can reveal sensitive information that none revealed alone
C. Social media platforms aggregate your data and sell it to advertisers
D. Too many posts about work can violate your employment contract
Question 4 of 5
Your company just won a new DoD contract and you want to post about it on LinkedIn. What is the correct approach?
A. Post immediately — contract awards are public information once signed
B. Check with your supervisor or communications team before posting — even positive announcements require approval
C. Post only the contract value, not the program name
D. Post after 30 days — the standard security hold period for contract announcements
Question 5 of 5
Which of the following is generally acceptable to post on professional social media?
A. A photo taken at your desk showing the general office environment
B. An announcement that you are traveling to a defense customer site next week
C. A post about earning a professional certification in your general field
D. A technical description of a manufacturing process you use on a defense program
Module 11 of 14

Physical Security & Visitor Control — Badges, Tailgating, and the Clean Desk

Cybersecurity controls protect your network. Physical security controls protect the space where that network lives — and where CUI is printed, discussed, and worked on every day. A determined adversary who can walk into your facility has bypassed every technical control you have implemented.

Why Physical Security Is a CMMC Requirement (11.1)

NIST SP 800-171 Control Family 3.10 — Physical Protection — requires that organizations limit physical access to systems containing CUI to authorized individuals, protect systems from unauthorized physical access, and maintain audit records of physical access. These are not soft guidelines. They are assessed controls that a C3PAO will verify during certification.

Physical access bypasses every cybersecurity control in your environment. An attacker who can sit at an unlocked workstation, photograph a printed CUI document, or plug a device into an unattended machine has circumvented your firewall, your EDR, your SIEM, and your MFA simultaneously.

The Insider Can Walk In. So Can the Adversary. Physical security failures are not just about external intruders. The most common physical CUI exposures involve authorized personnel — an employee who left a document on the printer, a visitor who was not escorted, a delivery person who was allowed access to an area they should not have been in. Physical security is behavioral, and it requires every person in the facility to enforce it.

Badge Discipline — The First Line of Physical Defense (11.2)

Your access badge is a physical credential — the physical equivalent of a username and password. Every rule that applies to digital credentials applies to your badge.

  • Wear your badge visibly at all times in controlled areas — it allows colleagues and security personnel to immediately identify authorized versus unauthorized individuals
  • Never loan your badge to anyone — not to a colleague who forgot theirs, not to a visitor who needs temporary access, not to anyone. If someone needs access, they need to obtain it through the proper process
  • Report a lost or stolen badge immediately — within minutes, not at the end of the day. A lost badge is a security incident. Report it to your security officer before you do anything else
  • Do not prop doors open — even briefly, even for a delivery, even when you are coming right back. A propped access door is an open perimeter breach for its entire duration
  • Challenge unescorted individuals without badges — politely but directly. "Can I help you find someone?" is sufficient. Unescorted, unbadged individuals in controlled areas are a reportable security observation

Tailgating: Tailgating — following an authorized person through a secured door without presenting your own credentials — is one of the most common physical security breaches in corporate environments. It feels rude to close a door in someone's face. That social discomfort is exactly what attackers exploit. Every person must badge through independently, every time.

Visitor Control — Escort Requirements and Authorization (11.3)

Visitors — including vendor representatives, auditors, maintenance personnel, delivery staff, and customers — who enter areas where CUI is processed or stored must be authorized, logged, and escorted. These are not optional courtesies. They are CMMC-required access controls.

  • Verify visitor identity before granting access — a government-issued ID should be checked against the expected visitor log
  • Issue a visitor badge visually distinct from employee badges — visitors should be identifiable at a glance in any controlled area
  • Escort visitors at all times in areas containing CUI systems or documents — "escort" means physical presence, not telling them which way to go
  • Retrieve visitor badges on departure — a visitor badge that leaves the building is a physical access credential that may still work on re-entry
  • Secure CUI before visitors enter the work area — lock screens, turn documents face-down, close sensitive applications

The Maintenance Exception Trap: HVAC technicians, electricians, and IT vendors are among the most common unescorted visitors in controlled areas — because their work is disruptive to escort, because they arrive unexpectedly, and because people assume they are "just fixing something." These individuals have physical access to every area they enter. They must be escorted and CUI must be secured before they enter, regardless of how inconvenient this is.
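
The "authorized, logged, and escorted" requirement implies an audit record of every visit. The sketch below illustrates that record-keeping in Python — the `VisitorLog` class and its method names are invented for this example, not a reference to any real facility system:

```python
from datetime import datetime

class VisitorLog:
    """Minimal sketch of a visitor audit record: who entered, who escorted
    them, when they left, and which badges are still outstanding."""

    def __init__(self):
        self.entries = []

    def check_in(self, name: str, badge_id: str, escort: str) -> None:
        # Every visitor gets a logged entry, a distinct badge, and an escort.
        self.entries.append({"name": name, "badge": badge_id,
                             "escort": escort, "in": datetime.now(), "out": None})

    def check_out(self, badge_id: str) -> None:
        # Badge retrieval on departure closes the entry.
        for e in self.entries:
            if e["badge"] == badge_id and e["out"] is None:
                e["out"] = datetime.now()
                return
        raise ValueError(f"badge {badge_id} was never issued or already returned")

    def unreturned_badges(self) -> list[str]:
        """Badges still outstanding — each one is a live physical credential."""
        return [e["badge"] for e in self.entries if e["out"] is None]

log = VisitorLog()
log.check_in("HVAC technician", "V-101", escort="J. Smith")
log.check_in("C3PAO assessor", "V-102", escort="K. Lee")
log.check_out("V-102")
print(log.unreturned_badges())  # ['V-101']
```

The `unreturned_badges` check is the point of the exercise: a badge that never came back is exactly the "may still work on re-entry" exposure the bullet list warns about.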

Clean Desk Policy — Physical CUI at Your Workstation (11.4)

The clean desk policy requires that CUI — physical and digital — is secured when you are not actively working with it. This is an assessed CMMC control. A C3PAO assessor walking through your facility during the assessment will observe workstation practices as evidence of compliance.

  • Lock your screen every time you step away — Windows: Win+L. Mac: Ctrl+Cmd+Q. No exception for "I'll be right back"
  • Secure printed CUI in a locked drawer or cabinet when not in active use — not face-down on the desk, not in a stack that "nobody will look at"
  • Clear the printer immediately after printing CUI documents — documents left at the printer are visible to every person who walks past
  • Shred before discarding — any printed document containing CUI must be cross-cut shredded or placed in a secure destruction bin. It does not go in the recycling bin or the trash
  • Whiteboard and notepad discipline — CUI written on whiteboards or notepads during meetings must be erased or secured before the room is vacated

Reporting Physical Security Observations (11.5)

Physical security depends on every employee acting as an observer. Security cameras cover a fraction of what human eyes cover. The most common physical breaches are caught — or not caught — by employees who noticed something and either reported it or did not.

  • Report unescorted individuals without badges in controlled areas — to your FSO or security officer, immediately
  • Report suspicious behavior — someone photographing facility layouts, equipment, or documents; someone attempting to access areas they are not authorized for; someone removing items from the facility without apparent authorization
  • Report lost or missing CUI documents — a missing printed CUI document is a reportable incident under DFARS 252.204-7012 if there is reason to believe it left the facility
  • Report found or abandoned media — a USB drive found in the parking lot, a document left in a conference room, a device left unattended in a public area. Do not plug in found USB drives. Report them.

See Something, Say Something — It Is a Compliance Requirement: Under NIST SP 800-171 Control Family 3.10, physical security reporting is not a cultural nicety — it is a documented control requirement. Your organization's CMMC certification depends in part on employees fulfilling the human element of physical security. Report what you observe.

MODULE 11 QUIZ — 5 Questions — Minimum 80% to Pass
Question 1 of 5
A colleague forgot their badge and asks you to badge them into the secure area since they work there every day. What is the correct response?
A. Badge them in — they are a known colleague and it would be rude to refuse
B. Badge them in once but remind them to bring their badge next time
C. Direct them to security to obtain a temporary badge through the proper process — every person must badge through independently
D. Badge them in if a supervisor authorizes it verbally
Question 2 of 5
What is tailgating in a physical security context and why is it dangerous?
A. Following someone too closely in traffic — dangerous because it distracts security personnel
B. Following an authorized person through a secured door without badging — dangerous because it allows unauthorized access while appearing normal
C. Accessing another employee's computer from behind their desk
D. Reading documents over a colleague's shoulder without permission
Question 3 of 5
You are leaving a meeting room where CUI documents were reviewed. Before you leave, what must you verify?
A. That the lights are off to conserve energy
B. Nothing — the cleaning staff will handle any leftover materials
C. That all printed CUI is accounted for and secured, whiteboards are erased, and no documents are left behind
D. That the door is locked — physical security of the room is sufficient
Question 4 of 5
You find a USB drive in the parking lot near your facility entrance. What should you do?
A. Plug it into your computer to check if it belongs to a colleague
B. Leave it where you found it — it is not your responsibility
C. Pick it up and report it to your security officer without plugging it in
D. Plug it into a personal device rather than a work device to check its contents safely
Question 5 of 5
Under the clean desk policy, what must you do every time you step away from your workstation — even briefly?
A. Log out completely and shut down your computer
B. Lock your screen — Win+L on Windows, Ctrl+Cmd+Q on Mac
C. Nothing if you will return within five minutes
D. Turn your monitor off — this prevents anyone from seeing your screen
Module 12 of 14

Software, Updates & Configuration — Why What You Install Matters

Unauthorized software, missed patches, and misconfigured systems are among the most common entry points for attackers into defense contractor networks. This module covers what you can and cannot install, why updates are mandatory not optional, and what shadow IT costs your organization.

The Software Authorization Requirement (12.1)

NIST SP 800-171 Control Family 3.4 — Configuration Management — requires that organizations establish and maintain baseline configurations for their systems and control changes to those configurations. A core component of this is maintaining an inventory of authorized software and prohibiting the installation of unauthorized software.

This means that on any company-issued device or system within your CMMC scope, you may only install software that has been reviewed, approved, and added to your organization's authorized software list. Installing software outside that list — regardless of how useful or harmless it seems — is a compliance violation that can affect your organization's CMMC certification status.

Why Every Unauthorized Install Is a Risk: Every piece of software installed on a system expands its attack surface. Unauthorized software may contain malware, may introduce unpatched vulnerabilities, may create unmonitored network connections, or may access data on the system that it has no business touching. Your IT team's software approval process exists specifically to evaluate these risks before they reach your machine.

Shadow IT — The Hidden Compliance Crisis (12.2)

Shadow IT is technology used within an organization without official IT knowledge or approval. It includes unauthorized software, personal cloud storage used for work files, consumer messaging apps used for work communications, and web-based tools accessed through a browser without IT review.

How Shadow IT Creates Compliance Gaps

An engineer at a defense subcontractor starts using a free online file converter to quickly process CAD drawings — it is faster than the approved tool. The converter is a web service that uploads files to a third-party server to process them. The CAD drawings contain CUI. The engineer has just transmitted CUI to an unapproved external server with no audit trail, no encryption verification, and no knowledge of where those files are stored. The converter's terms of service may allow it to retain and use uploaded content. This is a CMMC compliance violation, a potential DFARS violation, and possibly an ITAR violation — all from trying to save five minutes.

  • Common shadow IT risks: Personal Dropbox/Google Drive for work files, WhatsApp/Telegram for team communication, free online tools (file converters, PDF tools, translation services), browser extensions not reviewed by IT
  • If an approved tool does not meet your needs: Report it to your supervisor and IT team so an approved alternative can be evaluated — do not work around it

Why Security Updates Are Not Optional (12.3)

Every software update notice you dismiss or postpone is a potential window an attacker can climb through. NIST SP 800-171 Control Family 3.14 — System and Information Integrity — requires that systems be protected against malware and that security-relevant software updates are installed promptly. Most major breaches in the past decade have exploited vulnerabilities for which patches were already available — the breach succeeded not because the vulnerability was unknown, but because the patch had not been applied.

60%
of breaches exploit vulnerabilities where a patch existed but was not applied
15 days
Average time from public vulnerability disclosure to active exploitation by attackers
<72 hrs
Time until nation-state actors begin scanning for newly disclosed critical vulnerabilities

When Your IT Team Pushes an Update: Install it promptly. Do not dismiss update prompts. Do not set "remind me in 7 days" repeatedly. If an update requires a reboot at an inconvenient time, schedule it for after hours — but do not skip it. A missed critical patch is a known open door that attackers actively scan for.
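
The arithmetic behind "install it promptly" is simple enough to write down. This is an illustrative sketch — the helper names are invented, and the 15-day threshold echoes the average exploitation time cited in the statistics above:

```python
from datetime import date

# Average days from disclosure to active exploitation (from the stats above).
EXPLOITATION_WINDOW_DAYS = 15

def exposure_days(disclosed: date, patched: date) -> int:
    """Days a publicly known vulnerability stayed unpatched on the system."""
    return (patched - disclosed).days

def likely_exposed(disclosed: date, patched: date) -> bool:
    """True if the patch delay exceeded the average time-to-exploitation."""
    return exposure_days(disclosed, patched) > EXPLOITATION_WINDOW_DAYS

# Postponing a patch "for a couple of weeks" already overshoots the window:
print(likely_exposed(date(2026, 3, 1), date(2026, 3, 20)))  # True
print(likely_exposed(date(2026, 3, 1), date(2026, 3, 5)))   # False
```

Two clicks of "remind me in 7 days" puts you past the average exploitation window before the second reminder even fires.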

Configuration Discipline — Settings You Should Never Change (12.4)

Your organization's IT team has configured your work devices to meet CMMC requirements. Many of those configuration settings are security controls — they exist to limit what software can run, what network connections can be made, and what data can leave the device. Changing these settings — even to make something work more conveniently — can undermine the security baseline your organization was certified against.

  • Do not disable antivirus or EDR software — even temporarily, even because it seems to be slowing your computer down. If security software is causing a problem, report it to IT
  • Do not disable the firewall — a disabled firewall removes the primary network-level barrier between your device and the internet
  • Do not disable automatic updates — your IT team controls the update policy for a reason. If updates are disruptive, report the issue to IT rather than disabling the mechanism
  • Do not change network proxy or DNS settings to bypass content filtering — content filters are security controls, not just productivity monitoring
  • Do not enable administrator access on your own account unless your role specifically requires it and IT has approved it

Browser Extensions — The Hidden Attack Surface (12.5)

Browser extensions deserve specific attention because they are widely misunderstood as harmless productivity tools. A browser extension runs inside your browser with access to every page you visit, every form you fill out, and every credential you enter. A malicious or compromised extension is effectively malware with privileged access to your browsing session.

  • Install only IT-approved browser extensions on work devices — treat extension installation with the same discipline as software installation
  • Extensions can be compromised after installation — a legitimate extension purchased by a malicious actor and updated with new code is one of the most common supply chain attack vectors for browser-based espionage
  • Review permissions before installing anything — an extension that requests "read and change all your data on all websites" has access to everything you do in your browser, including CUI in web-based portals
  • Remove extensions you no longer actively use — unused extensions that remain installed are an ongoing exposure with no offsetting benefit

The Rule: On work devices, if IT did not install it and IT did not approve it, it should not be on the device. This applies to software, browser extensions, mobile apps, and cloud services used for work. When in doubt, ask IT before installing — not after something goes wrong.
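
To make the "review permissions before installing" advice concrete, here is a minimal Python sketch that flags broad permissions in a Chrome-style extension manifest. The `flag_risky_permissions` helper and its deny-list are illustrative only — a real review would apply your IT team's criteria, not this short list:

```python
import json

# Permission strings that grant an extension broad access to browsing data.
# Illustrative, not exhaustive: "<all_urls>" is the manifest equivalent of
# "read and change all your data on all websites".
BROAD_PERMISSIONS = {"<all_urls>", "tabs", "webRequest", "cookies", "history"}

def flag_risky_permissions(manifest_json: str) -> list[str]:
    """Return the broad permissions requested in a Chrome-style manifest."""
    manifest = json.loads(manifest_json)
    requested = set(manifest.get("permissions", []))
    requested |= set(manifest.get("host_permissions", []))
    return sorted(requested & BROAD_PERMISSIONS)

example = ('{"name": "Demo Helper", "permissions": ["storage", "tabs"], '
           '"host_permissions": ["<all_urls>"]}')
print(flag_risky_permissions(example))  # ['<all_urls>', 'tabs']
```

An empty result does not make an extension safe — a legitimate extension can be sold and updated with malicious code later, as the section above notes — but a non-empty result tells you exactly how much of your browsing session it can see.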

MODULE 12 QUIZ — 5 Questions — Minimum 80% to Pass
Question 1 of 5
You want to install a free productivity app on your work laptop that is not on the approved software list. It has excellent reviews and you are confident it is safe. What should you do?
A. Install it — excellent reviews indicate it is safe and your judgment is sufficient
B. Install it on a personal device only and use it alongside your work device
C. Request IT approval before installing — submit it for review and use the approved version once cleared
D. Install it temporarily and remove it once finished — temporary installs do not require approval
Question 2 of 5
An engineer uploads CUI technical drawings to a free online file conversion tool to quickly process them. Why is this a compliance violation even though the tool appears legitimate?
A. It is not a violation if the tool uses HTTPS encryption
B. CUI was transmitted to an unapproved external server outside the CMMC-assessed environment with no audit trail
C. It is only a violation if the tool is based overseas
D. File conversion tools are explicitly prohibited under DFARS regardless of the content processed
Question 3 of 5
Your computer has been prompting you to install a security update for two weeks. You keep postponing it because it requires a reboot. What is the correct action?
A. Continue postponing until a convenient time — the patch will still work when you eventually install it
B. Schedule the reboot for after hours and install the update promptly — postponing patches leaves known vulnerabilities open
C. Disable automatic updates so the prompt stops appearing
D. Wait for IT to force the update — it is their responsibility, not yours
Question 4 of 5
Why are browser extensions a significant security risk on work devices?
A. They slow down the browser, which can cause productivity losses
B. Extensions run inside the browser with access to every page, form, and credential — a compromised extension is effectively malware
C. Extensions are banned under CMMC Level 2 entirely
D. They consume bandwidth that could be used by security monitoring tools
Question 5 of 5
Your antivirus software is running a full scan that is slowing your computer significantly. You want to temporarily disable it to finish a time-sensitive task. Is this acceptable?
A. Yes — briefly disabling antivirus for a critical task is an acceptable trade-off
B. Yes — as long as you re-enable it within one hour
C. No — report the performance issue to IT and let them resolve it without disabling security controls
D. Yes — antivirus only matters when downloading files, not during normal work
Module 13 of 14

AI Tools & CUI Risk — ChatGPT, Copilot, and the Compliance Gap Nobody Is Talking About

Generative AI tools have become standard professional tools in a matter of months. Most defense contractors have no policy on them yet. The risk is real, it is happening today at organizations like yours, and it is the fastest-growing unaddressed CUI exposure in the defense supply chain. This module covers what you need to know right now.

The AI CUI Problem — What Is Actually Happening (13.1)

Generative AI tools — ChatGPT, Microsoft Copilot, Google Gemini, Claude, and others — are being used by defense contractor employees to draft reports, summarize documents, analyze data, write code, and translate technical specifications. In many cases, employees paste the content they are working with directly into the AI tool's input window. That content frequently contains CUI.

The Samsung Precedent — AI Data Leakage at Scale

In 2023, Samsung engineers pasted proprietary source code and internal meeting notes into ChatGPT to help debug and summarize them. The content was transmitted to OpenAI's servers, potentially used for model training, and became inaccessible to Samsung from a data-control perspective. Samsung subsequently banned ChatGPT on corporate devices. Defense contractors face the identical risk — except the stakes involve CUI, ITAR-controlled technical data, and potential FCA liability rather than corporate IP.

The Core Problem: When you paste CUI into a commercial AI tool, that information leaves your organization's CMMC-assessed environment and is transmitted to a third-party commercial server. Depending on the tool's data retention policies, it may be stored, used for model training, or accessible to the tool's employees. There is no FedRAMP authorization, no CUI handling agreement, no audit log, and no way to recall it.

Why There Is No Clear Guidance Yet — and Why That Makes It More Dangerous (13.2)

As of March 2026, the DoD has not issued definitive guidance on whether submitting CUI to commercial AI tools constitutes a DFARS violation. This absence of explicit guidance does not mean it is acceptable — it means the question has not been formally resolved. The underlying CMMC and DFARS requirements have not changed: CUI must be processed only on systems that meet the required security baseline.

The lack of guidance creates a false sense of safety. Employees reason: "if it were prohibited, there would be a policy saying so." But the absence of an explicit prohibition does not create authorization. The existing CUI handling requirements apply to all systems — including commercial AI tools that have not been assessed, approved, or authorized.

The FCA Exposure Is Real: If a defense contractor employee submits CUI to an unauthorized AI tool, and that disclosure is later discovered — through a SIEM alert, a whistleblower, or a C3PAO assessment — the organization faces potential FCA liability for the unauthorized CUI disclosure. Two years from now, this is likely to be the source of the next wave of cybersecurity enforcement actions. The MORSECORP case started with behavior that seemed routine at the time.

What You Cannot Do With AI Tools — The Hard Lines (13.3)

Until your organization has evaluated and approved specific AI tools for CUI-adjacent work, the following are the hard lines that apply to any commercial AI tool — including free and subscription versions of ChatGPT, Copilot, Gemini, Claude, and any other generative AI service:

  • Never paste CUI directly into any commercial AI tool — technical specifications, drawing descriptions, contract language, program names, or any other controlled information
  • Never upload documents containing CUI to an AI tool's file upload feature — the document is transmitted to the tool's servers in its entirety
  • Never use AI tools to summarize, translate, or analyze CUI documents — even if you believe you are paraphrasing or anonymizing the content, the underlying information may still be identifiable
  • Never describe CUI in enough detail to reconstruct it — "help me write a report about a titanium alloy rotor component with [specific tolerance] for [specific defense program]" is a CUI disclosure even without copy-pasting a document

Microsoft Copilot Is Not an Exception: Microsoft Copilot integrated into Microsoft 365 may appear to be a "corporate" tool — but the commercial version of Copilot does not automatically meet FedRAMP requirements or CUI handling standards. Only specifically approved government or enterprise configurations with documented CUI handling agreements are potentially acceptable. Check with your IT team before using Copilot on any CUI-adjacent work.

What You Can Do — Legitimate AI Use Without CUI Risk (13.4)

Restricting AI tool use for CUI does not mean you cannot use these tools at all. There is a clear line between work that involves controlled information and work that does not.

  • Generally acceptable: Using AI to draft general professional communications that contain no CUI, research publicly available information, improve grammar or structure of non-sensitive documents, generate ideas or outlines for non-sensitive content
  • Use approved internal tools when available: Some organizations deploy enterprise AI tools with appropriate data handling controls. If your organization has made such a tool available, understand its authorization scope before using it for CUI-related work
  • The mental test: Before using any AI tool for a work task, ask: "Does what I am about to type or upload contain, describe, or reference CUI, program names, technical specifications, or contract information?" If yes — stop and use approved tools only
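
The mental test above can even be sketched as code — with the strong caveat that no keyword list can reliably detect CUI, so this is an illustration of the habit, not a substitute for judgment or an approved technical control. The `screen_before_submit` helper and its marker list are invented for this example:

```python
import re

# Illustrative markers only — a real deny-list would come from your
# security team, and absence of a match never proves content is safe.
CUI_MARKERS = [
    r"\bCUI\b",
    r"\bITAR\b",
    r"\bNOFORN\b",
    r"\bcontract\s+no\.?\s*\w+",
    r"\bdrawing\s+number\b",
]

def screen_before_submit(text: str) -> list[str]:
    """Return the marker patterns found in text before it goes to an AI tool.

    A non-empty result means: stop, do not submit, use approved tools only.
    An empty result means only that none of THESE markers matched.
    """
    return [p for p in CUI_MARKERS if re.search(p, text, flags=re.IGNORECASE)]

hits = screen_before_submit("Summarize the CUI spec for contract no. W9124")
print(bool(hits))  # True — two markers matched, so this prompt must not be sent
```

The point of the sketch is the ordering: the check runs before anything is typed into the tool, because once submitted there is no recall mechanism.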

Report Uncertainty to Your Security Team: If you are unsure whether a specific AI use case is acceptable, ask your security officer or IT team before proceeding. The AI compliance landscape is evolving rapidly — your organization may have more recent guidance than this training reflects. When in doubt, ask.

Data Retention, Model Training, and Why You Cannot Take It Back (13.5)

Understanding what happens to data after you submit it to a commercial AI tool helps explain why the prohibition is permanent, not just temporary. Unlike an email you can recall or a file you can delete from a shared drive, data submitted to an AI tool enters a system you cannot control.

  • Data retention: Most commercial AI services retain conversation history for anywhere from 30 days to indefinitely, depending on account settings and tier. Enterprise accounts may have different retention policies — but free and standard accounts typically do not offer CUI-appropriate controls
  • Model training: Some services use conversation data to improve their models. Content you submit may influence future model outputs. Even if the tool does not expose your specific submission to other users, the information has left your control
  • No audit trail: Unlike your organization's email or file systems, there is no audit log you or your security team can review to see what was submitted to an external AI tool. This makes detection, investigation, and remediation significantly harder
  • No recall mechanism: Once CUI is submitted to an external service, it cannot be retrieved or deleted from that service's infrastructure with any certainty

The Standard to Apply: Would you email this content to a personal Gmail account? Would you upload it to a personal Dropbox? If not — and you should not — then you should not submit it to a commercial AI tool either. The compliance analysis is identical. The data leaves your organization's control and enters an unauthorized third-party system.

MODULE 13 QUIZ — 5 Questions — Minimum 80% to Pass
Question 1 of 5
You need to write a report summarizing a CUI technical specification. You plan to paste the specification into ChatGPT to help draft the summary. Why is this a compliance problem?
AIt is not a problem — ChatGPT uses end-to-end encryption so the data is protected
BThe CUI leaves your CMMC-assessed environment and is transmitted to an unauthorized third-party server with no CUI handling agreement
CIt is only a problem if ChatGPT's servers are located outside the United States
DThe problem is only the output — the input data itself is not retained by ChatGPT
Question 2 of 5
The DoD has not yet issued explicit guidance prohibiting CUI submission to commercial AI tools. Does this mean it is acceptable?
AYes — if there is no explicit prohibition, it is permitted until guidance says otherwise
BNo — the absence of explicit prohibition does not create authorization; existing CUI handling requirements apply to all systems including AI tools
CYes — only actions explicitly listed in DFARS are prohibited
DYes — commercial AI tools are classified as COTS products and are therefore exempt from CUI requirements
Question 3 of 5
Microsoft Copilot is integrated into your Microsoft 365 subscription at work. Is it safe to use it to help draft a report that references CUI program details?
A. Yes — Microsoft is a US company and Copilot automatically meets FedRAMP requirements
B. Yes — Microsoft 365 is an approved platform, so all features are automatically compliant
C. Not without verification — only specifically approved government or enterprise configurations with documented CUI handling agreements are potentially acceptable
D. Yes — Copilot processes data locally on your device without transmitting it
Question 4 of 5
Which of the following AI tool uses is generally acceptable without CUI risk?
A. Using AI to summarize a CUI document you uploaded to the tool
B. Using AI to draft a general professional email with no CUI content
C. Describing a defense program's technical requirements in detail so AI can help write a proposal
D. Using AI to translate a CUI document from English to another language
Question 5 of 5
Why is CUI submitted to a commercial AI tool particularly difficult to remediate compared to, say, a CUI document accidentally emailed to the wrong recipient?
A. AI tools are faster, so the data spreads more quickly than email
B. There is no recall mechanism, no audit trail, potential use in model training, and indefinite data retention outside your organization's control
C. AI tools are legally required to retain data for seven years under federal regulations
D. AI tools automatically share data with government regulators who monitor for CUI violations
Module 14 of 14 — Annual Recertification

Annual Recertification — The Highest-Risk Topics Refreshed

This module satisfies your annual recertification requirement under the NIST SP 800-171 Awareness and Training controls (the 3.2 family). It covers the highest-risk topics from the full curriculum — the behaviors most commonly involved in defense contractor security incidents — refreshed with the latest documented cases and emerging threats.

14.1 What Has Changed Since Your Last Training

The threat environment facing defense contractors evolves continuously. Since the original onboarding training was completed, several significant developments have reinforced the core principles you learned:

  • FCA enforcement has accelerated: The DOJ has now settled nine cybersecurity False Claims Act cases since 2021. The Illinois machining subcontractor settlement in December 2025 confirmed that small manufacturers are active enforcement targets, not just large contractors. Every SPRS affirmation your organization's senior official signs carries the same legal exposure as MORSECORP's.
  • AI tool risk is new and growing: The use of generative AI tools in defense contractor workflows has grown significantly. If you completed onboarding before this topic was covered, review Module 13 in full. Submitting CUI to commercial AI tools is the fastest-growing unaddressed compliance exposure in the supply chain.
  • The Phase 2 deadline is approaching: Mandatory C3PAO third-party certification begins November 10, 2026. If your organization has not begun its readiness process, that process should be active now.
  • APT5 targeting of personal accounts continues: Spearphishing campaigns targeting defense employees' personal Gmail and LinkedIn accounts are ongoing. The patterns described in Module 3 remain active threats.
14.2 The Six Behaviors That Prevent Most Breaches

Research on defense contractor breaches consistently shows that the majority of incidents involve one or more of six behavioral failures. These are the highest-leverage points where individual employee behavior directly determines security outcomes:

1. Clicking phishing links — especially in personal email. APT5 sends fake conference invitations to personal Gmail. Navigate directly to websites — never click links in unexpected emails.

2. Password reuse — one breach at any site gives attackers credentials for all sites where you reused that password. Every account needs a unique password. Use a password manager.

3. Approving unexpected MFA prompts — an MFA request you did not initiate means someone has your password right now. Deny it and report immediately.

4. Moving CUI outside approved systems — personal email, personal cloud storage, commercial AI tools, USB drives. CUI stays inside approved environments only.

5. Not reporting incidents — the 72-hour reporting clock starts at discovery. Unreported breaches compound liability. Report immediately — you will not be punished for reporting in good faith.

6. Leaving CUI unsecured — unlocked screens, printed documents at printers or on desks, physical access by unauthorized visitors. Lock your screen every time you step away.

14.3 The Threat Landscape Update — What Is Active Right Now

As of March 2026, the following threat actors and campaigns are actively targeting the US defense supply chain:

  • Volt Typhoon (China): Pre-positioned access in US critical infrastructure supporting defense facilities, using living-off-the-land (LOTL) techniques. Some footholds established in 2019 are still active. Focused on long-term intelligence collection and positioning for potential disruption.
  • APT5 (China): Ongoing spearphishing of defense contractor employees via personal email and LinkedIn — fake conference invitations, consulting offers, and job opportunities. Targets personal accounts specifically to bypass corporate security controls.
  • Handala (Iran): Confirmed destructive attack on Stryker Corporation in March 2026 — 200,000 devices wiped across 79 countries via one compromised admin credential. Demonstrated willingness to conduct destructive attacks against US defense-adjacent organizations.
  • Russian SVR/GRU: Ongoing credential harvesting and spearphishing against cleared defense contractors, targeting weapon systems development, C2 systems, and combat support programs.

The Common Thread: Every active campaign relies on a human action — clicking a link, approving a fake MFA prompt, connecting to a rogue Wi-Fi network, leaving a default password unchanged. Technical controls stop what technical controls can stop. Human behavior stops the rest. Your awareness is an active defense.

14.4 Your Reporting Obligations — A Refresher

Reporting obligations under CMMC and DFARS require action in three distinct scenarios. Each has a different trigger and a different required action:

Cyber Incident (72-hour rule): Any actual or suspected unauthorized access to systems that process CUI, suspected exfiltration of CUI, malware on CUI systems, or ransomware affecting covered systems. Report to your FSO immediately. Your organization must file a DIBNet report within 72 hours of discovery.

Suspicious Contact: Any approach — in person, via LinkedIn, via email, at a conference — that attempts to elicit information about your employer's programs, personnel, or technical work in a way that seems unusual. Report to your FSO within 24 hours.

Insider Threat Indicator: Behavioral observations consistent with the indicators covered in Module 6 — unusual data access patterns, unexplained financial changes, contact with foreign nationals, attempts to access unauthorized areas. Report factual observations to your FSO. Do not investigate, do not confront, and do not discuss with colleagues first.

14.5 Your Commitment — What This Training Means

Completing this training — all 14 modules — means you have been educated on the specific threats targeting your industry, the legal framework that governs your organization's compliance obligations, and the specific behaviors required to protect the work you do and the programs you support.

The defense programs your organization contributes to — whether they are submarine components, aircraft engines, helicopter parts, or support systems — are ultimately used by people in uniform in situations where their effectiveness and security directly affect lives. The machinist who leaves a CUI technical drawing on an unsecured printer is not just creating a compliance gap. They are potentially contributing to a chain of events that ends in a compromised weapon system, a nation-state adversary with knowledge they should not have, or a program that fails to protect the people depending on it.

The Standard Is Not Perfection — It Is Awareness and Reporting: You will not remember every control number. You will not always instantly recognize every threat. What the standard requires is that you know enough to pause when something seems wrong, that you know who to tell when it does, and that you tell them immediately. That pause and that report are the human layer of a national defense system. They matter.

Your completion certificate for this full 14-module curriculum is waiting. Thank you for completing this training.

MODULE 14 QUIZ — Final Recertification Quiz — Minimum 80% to Pass
Question 1 of 5
Which of the six highest-risk behaviors is most directly associated with the APT5 campaign targeting defense contractor employees?
A. Not reporting security incidents within 72 hours
B. Clicking phishing links — specifically in personal email accounts that bypass corporate security controls
C. Moving CUI to personal cloud storage
D. Tailgating through secured physical access points
Question 2 of 5
A colleague approaches you and says they think they may have accidentally pasted some CUI into ChatGPT yesterday but they are not sure it was actually CUI. What should they do?
A. Wait to see if anything happens before reporting — it may not have been CUI
B. Delete the ChatGPT conversation history to prevent further exposure
C. Report it to the security officer immediately — uncertainty about whether it was CUI is not a reason to delay reporting
D. Review the ChatGPT terms of service to determine whether the data is protected
Question 3 of 5
Looking across all the breaches covered in this training — F-35, Stryker, MORSECORP, Volt Typhoon — what is the single most consistent human behavior failure that enabled each one?
A. Failure to purchase expensive security software tools
B. Failure by IT departments to configure systems correctly
C. A specific human action or inaction — an unchanged password, an unsigned log entry, a clicked link, an unreported score — that a person could have caught or prevented
D. Failure to obtain CMMC Level 3 certification rather than Level 2
Question 4 of 5
A professional you do not know sends you a LinkedIn message saying they work at a major aerospace company and want to discuss a potential consulting arrangement related to your technical specialty. What is the correct first action?
A. Reply asking for more details about the opportunity before deciding
B. Ignore the message — no action needed since you did not respond
C. Report the approach to your security officer — unsolicited consulting offers related to your defense work are a documented recruitment pattern
D. Accept the connection and request a video call to verify their identity before discussing anything
Question 5 of 5
Which statement best describes your role in your organization's CMMC compliance?
A. CMMC compliance is the IT department's responsibility — your role is only to follow their instructions when asked
B. Your role is limited to not clicking phishing links — everything else is handled by security software
C. You are the human layer of the security architecture — your daily behaviors around CUI handling, reporting, and physical security are active compliance controls that no technical tool can replace
D. Your role is to complete this training annually — no other specific behaviors are required
All 14 Modules Complete — Self-Study Guide

CMMC Cybersecurity Awareness — Self-Study Guide  ·  Yana Ivanov  ·  For educational use only