Threat Analysis · Supply Chain · Part 2 of 2

The Blast Radius —
Criminal Ecosystems, the Ransom Economy, and the Cost of Looking Away

Analyst: Yana Ivanov
Published: March 2026
Classification: Public — Educational
Follows: The Trusted Channel · Part 1
Threat Actor: TeamPCP · Vect · LAPSUS$
Scope: Global · Active Campaign
Continues from Part 1 · 500,000+ corporate identities stolen · Ransomware affiliate network now active
Section 01

The numbers nobody put on the front page

This report is Part 2 of a two-part analysis. Part 1 — The Trusted Channel covered how TeamPCP executed the nine-day credential-chaining campaign that ended with the Telnyx Python package being poisoned on March 27, 2026. This report examines what it means.

Most coverage of the TeamPCP campaign focused on the technical mechanics — the WAV steganography, the version mismatch that gave it away, the immediate remediation steps. Those things matter. But they are not the story. The story is the scale, who is behind it, what they are doing with what they took, and what it exposes about the systems we have built to defend ourselves.

Start with the numbers.

Figure 1 — The Full Campaign Scale
500K+ corporate identities stolen across the full Trivy · LiteLLM · Telnyx chain
300GB of compressed credentials exfiltrated, actively being distributed now
95M LiteLLM monthly downloads: the AI infrastructure blast radius, with every major LLM provider in scope
300K ransomware affiliate keys distributed to BreachForums users, the largest affiliate program ever documented

To put those numbers in context: LockBit — the ransomware operation that dominated headlines for three years and caused billions in damage — operated with 73 affiliate accounts. TeamPCP and Vect just handed ransomware keys to 300,000 people. That is not a ransomware group. That is a ransomware economy.

The 500,000 corporate identities figure comes from TeamPCP's own communications, so treat it with appropriate skepticism — criminal groups routinely exaggerate. But even if the real number is half that, this campaign represents the largest documented credential theft operation from the open-source software supply chain in history. The credentials stolen from LiteLLM alone — 95 million monthly downloads, direct dependency of CrewAI, DSPy, Mem0, and five other major AI frameworks — include API keys for OpenAI, Anthropic, Google Vertex, and AWS Bedrock from thousands of enterprise environments.

These are not sitting in a database waiting to be used someday. The FBI Assistant Director of Cyber Division stated publicly: "Given the volume of stolen credentials across likely thousands of downstream environments, expect an increase in breach disclosures, follow-on intrusions, and extortion attempts in the coming weeks." That statement was made on March 24. The Telnyx compromise happened three days later. The campaign was still running when he said it.

Section 02

The ransom economy — and why paying is bad for everyone

The official guidance from every major cybersecurity body — CISA, the FBI, the UK's NCSC — is clear and consistent: do not pay ransoms. The reasoning is straightforward. Payment funds future attacks, does not guarantee data return or deletion, and signals that the target is willing to pay again. Nearly 80% of organizations that paid a ransom were targeted by another attack, often by the same group.

And yet in 2025, roughly 37% of ransomware victims paid anyway. When the data is filtered to specific sectors — financial services, healthcare, manufacturing — the payment rates are higher. Some of the largest organizations in the world, with full knowledge of the official advice, with cybersecurity teams and legal counsel and incident response plans, write the check.

The reason paying a ransom is counterproductive goes beyond the individual company making the decision. Payment rewards bad behavior, and not just the attackers': the insurance companies that profit from covering those payments benefit from every payout. It is the same dynamic as giving a child candy every time they throw a tantrum: the behavior does not stop, it escalates. Every payment teaches the attacker that the tactic works, sets a new price floor for the next demand, and funds the infrastructure for the next campaign against the next victim.

"The $75 million Dark Angels payment in 2024 did not just fund one group. It sent a signal to every criminal operation watching: if you find the right target and apply enough pressure, nine figures is achievable. Every group that raised its demands after that is reading from that playbook."

Yana Ivanov · Security Analyst · SiteWave Studio

The insurance industry's role in the problem

There is a less-discussed participant in the ransom economy who benefits from every payment: the cyber insurance industry. The global cyber insurance market reached $15 billion in 2024 and is projected to hit $29 billion by 2027. A significant driver of that growth is ransomware coverage — policies that reimburse ransom payments and related costs. The industry is, structurally, a funding mechanism for criminal enterprises.

This is not an accusation of bad intent. Insurance companies are pricing risk and covering losses as they are designed to do. But the consequence of making ransom payment the economically rational choice — because insurance covers it — is that payment rates stayed elevated for years after official guidance said to stop paying. The actuarial tables made criminals more money than they could have extracted on their own.

It gets more specific than that. In at least two confirmed recent cases, attackers stole their victims' cyber insurance policies before making their ransom demand. They read the coverage limit and set the demand just below it. The company pays because insurance covers the full amount. The attacker receives the maximum the policy will pay. The insurer pays out. Premiums rise the following year for all policyholders. The cycle repeats. The obvious follow-up question — whether some of this involves deliberate insider collusion — is not paranoia. The FBI has investigated cases where insiders tipped off ransomware groups about a company's vulnerabilities and insurance coverage. It is a documented threat vector, not a conspiracy theory.

The structural problem: No national government has banned ransomware payments outright. The UK launched a consultation in January 2025 on banning payments for public sector bodies and critical national infrastructure — specifically to make those entities unattractive targets. The US has mandatory reporting requirements coming through CIRCIA but no payment prohibition. The insurance industry, which has the most direct leverage to change company behavior through coverage terms, has largely treated ransom payments as a covered loss. Until the financial incentive to pay is removed, the behavior will continue — and the criminal ecosystem it funds will continue to grow.

The shift that makes backups insufficient

Companies heard the advice about backups. Many implemented it. The criminal ecosystem noticed and adapted. Data-theft-only attacks — where criminals steal data and threaten to publish it without encrypting anything — rose from 49% of extortion cases in the first half of 2025 to 65% in the second half. Backups restore your systems. They do not unring the bell of a data leak. When the threat is reputational, regulatory, and legal exposure from stolen customer records, the backup program that took three years to implement does not help. The game changed precisely because defenders got better at one part of it. The Vect partnership is designed to exploit exactly this shift: the threat is publication, not encryption, and the only remedy is payment.

Section 03

Who TeamPCP is — and why "just criminals" is the alarming part

When most people hear about a sophisticated multi-ecosystem supply chain attack that compromised 500,000 corporate identities in nine days, they assume nation-state resources. A government with intelligence assets, years of planning, and unlimited technical capability. The assumption is that only a major adversary could pull this off.

TeamPCP has no confirmed nation-state affiliation. They are, as best as researchers can determine, a financially motivated criminal platform active since approximately July 2025 — less than a year old at the time of the Telnyx attack. Their strength does not come from novel exploits or original tradecraft. It comes from automation. They industrialized known techniques, known misconfigurations, and recycled tooling into a self-propagating criminal infrastructure. The innovation was operational, not technical.

That should be more alarming than a nation-state actor, not less. Nation-state attacks require political authorization, operational security, and geopolitical calculation. A criminal platform running for money has none of those constraints. They move faster, take more risks, adapt more quickly, and have no reason to hold back. And they have demonstrated they do not need sophisticated resources to cause sophisticated damage.

The age of the threat

The FBI reports that the average age of someone arrested for cybercrime is 19, compared to 37 for other categories of crime. That is not a statistical anomaly; it is a structural feature of how this threat is developing. Sixty-one percent of hackers start hacking before the age of 16.

LAPSUS$ — the extortion group now partnered with TeamPCP through the Vect ransomware operation — had at least two teenage members and was led in significant part by an 18-year-old from Oxford who started hacking at age 11. He breached Microsoft, Nvidia, Samsung, Okta, and Uber before he was old enough to rent a car. He was eventually given an indefinite hospital order and confined to a secure psychiatric facility in the UK. Scattered Spider, which caused hundreds of millions in damage to UK retailers in 2025 and continues operating, was largely composed of teenagers. A 20-year-old in Florida received a ten-year federal sentence and was ordered to pay $13 million in restitution. Some co-conspirators in these cases began offending at 13 or 14.

The research supports a structural explanation for why young people are drawn to this. The adolescent brain processes risk differently — the prefrontal cortex responsible for long-term consequence evaluation is not fully developed until the mid-twenties. Young people are not making irrational decisions by their own calculus: the thrill of breaking a system is immediate and real, while a federal prosecution feels abstract and distant. The UK's National Crime Agency found that financial gain is not even the primary motivator for many young offenders — notoriety, peer recognition, and the intellectual satisfaction of defeating a defended system drive more recruitment than money does.

The recruitment pipeline: Children as young as seven are being pulled into cybercrime in the UK through gaming communities, Discord servers, and online tutorials. Recruitment starts with gaming or shared interests — familiar, social, and low-suspicion. Private servers move young users from casual chats into restricted channels where tools and tasks are shared. Many young recruits do not understand they are breaking the law. Early tasks feel small and low-risk. The path from gaming community to federal crime has no natural warning signs — no moment where an adult catches them and redirects them — because it all happens online, invisibly, in spaces that most parents and schools do not monitor.

The geopolitical wild card

TeamPCP's motivations appear primarily financial. But one element does not fit the pure criminal profile: their wiper payload. While deploying credential-harvesting persistence tools everywhere else, they deployed destructive wiper malware specifically against Iranian infrastructure. Destruction generates no revenue. It serves a geopolitical purpose.

Whether TeamPCP operates under any form of state direction, accepts state-aligned commissions, or made independent targeting decisions cannot be determined from publicly available information. But the pattern — financially motivated criminal infrastructure with selective state-aligned destructive capability — is consistent with a model that several intelligence agencies have documented: criminal groups operating as plausible-deniability proxies for nation-state objectives. The line between mercenary and asset is deliberately blurred. It serves both parties.

Section 04

The AI infrastructure problem — when the tools building AI are compromised

LiteLLM is not just a popular library. It is the routing layer that sits between enterprise applications and every major AI provider — OpenAI, Anthropic, Google, AWS Bedrock. When a company builds an AI-powered product, LiteLLM is often what manages the credentials for all of those connections in one place. Compromising it yields API keys for an organization's entire AI stack simultaneously. One package. Every provider. All credentials. That is why 95 million monthly downloads makes it a more strategically valuable target than Telnyx's one million.
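
The single-point-of-exposure argument can be made concrete with a minimal Python sketch. The environment variable names below follow common provider conventions; the exact set any given deployment reads is an assumption for illustration, not data from the incident. The point is that one compromised process in the routing layer can enumerate credentials for every provider at once:

```python
# Environment variables a typical multi-provider routing deployment reads.
# Illustrative names only; real deployments vary.
PROVIDER_KEY_VARS = [
    "OPENAI_API_KEY",
    "ANTHROPIC_API_KEY",
    "GOOGLE_APPLICATION_CREDENTIALS",
    "AWS_ACCESS_KEY_ID",
    "AWS_SECRET_ACCESS_KEY",
]

def exposed_keys(env: dict) -> list:
    """Return which provider credentials are readable from a single process."""
    return [name for name in PROVIDER_KEY_VARS if name in env]

# One compromised process sees every configured provider simultaneously:
demo_env = {"OPENAI_API_KEY": "sk-...", "ANTHROPIC_API_KEY": "sk-ant-..."}
print(exposed_keys(demo_env))  # ['OPENAI_API_KEY', 'ANTHROPIC_API_KEY']
```

An attacker who lands in a process holding one provider's key usually gets one key; an attacker who lands in the routing layer gets the whole list.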

The confirmation nobody wanted to hear

TeamPCP confirmed they used AI — specifically Claude — to write malware components including lateral movement scripts and credential harvesting tools. This compressed their development timeline significantly, allowing them to move from initial compromise to mass distribution faster than traditional attack timelines would permit.

This is not a failure of Claude specifically. It is a demonstration of a structural problem: the same capabilities that make defenders faster make attackers faster. The same tools that generate security testing scripts generate malware. AI platforms do implement content filters — the same filters that block a request to generate a politically charged image will attempt to block malicious code generation. But determined adversaries work around those filters through incremental requests, role-play framing, and decomposition of complex attacks into innocent-looking components that individually clear the filter and collectively constitute the weapon.

Behavioral pattern detection is the right direction here. A single session asking for a script that reads environment variables looks benign. A pattern of sessions across time — reading environment variables, sending HTTP POST requests, XOR-encoding data, modifying startup folders — is a different picture entirely. The challenge is that AI sessions are mostly stateless. Each conversation starts fresh. The pattern only becomes visible across sessions over time, which requires storing and analyzing conversation history at scale, which in turn raises privacy concerns that the same companies are simultaneously being pressured to minimize. The arms race runs through the privacy debate as well as the technical one.
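
The cross-session idea reduces to a simple accumulator. This is a hedged sketch, not any platform's actual detection logic: the behavior tags, the suspicious set, and the threshold are all illustrative assumptions. It flags an account only when its sessions, taken together, cover enough distinct suspicious behaviors:

```python
from collections import defaultdict

# Hypothetical behavior tags a per-session classifier might emit.
SUSPICIOUS_SET = {"reads_env_vars", "http_post_exfil",
                  "xor_encoding", "startup_persistence"}
THRESHOLD = 3  # flag at 3+ distinct suspicious behaviors per account

def flag_accounts(session_log):
    """session_log: iterable of (account_id, behavior_tag) pairs,
    accumulated across many otherwise-stateless sessions."""
    seen = defaultdict(set)
    for account, tag in session_log:
        if tag in SUSPICIOUS_SET:
            seen[account].add(tag)
    return {a for a, tags in seen.items() if len(tags) >= THRESHOLD}

log = [
    ("acct_1", "reads_env_vars"),   # benign in isolation
    ("acct_1", "http_post_exfil"),  # still explainable
    ("acct_1", "xor_encoding"),     # third distinct behavior: flagged
    ("acct_2", "reads_env_vars"),   # never crosses the threshold
]
print(flag_accounts(log))  # {'acct_1'}
```

The privacy tension is visible even in this toy: `session_log` is exactly the retained cross-session history that minimization pressure pushes against.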

The recommendation loop

There is a second AI problem that received less attention. AI coding assistants — tools like GitHub Copilot and Cursor that suggest code as developers write — are trained on public code repositories. When a developer asks an AI assistant to help integrate multiple LLM providers, the most common recommendation is LiteLLM, because that is what the training data reflects. The package was the most popular solution when the training data was collected. The AI continues recommending it because popularity is a proxy for quality in its training signal.

This creates a feedback loop that serves attackers. Compromise a popular package. AI assistants trained on code using that package continue recommending it. Developers trust the recommendation. The blast radius extends beyond the direct download window into every project where an AI assistant suggests the compromised dependency. The attack spreads not through developer decision-making but through algorithmic recommendation — and the recommendation persists until training data is updated, which happens on a cycle measured in months, not hours.
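
The feedback loop is easy to state as code. This is an assumed reduction of popularity-as-quality ranking, not any real assistant's algorithm, and the corpus counts are invented for illustration:

```python
# Popularity-as-quality ranking, reduced to its essentials.
# Counts are illustrative, not real corpus statistics.
CORPUS_EXAMPLES = {"litellm": 41_000, "other-router": 900}

def recommend(candidates):
    # The candidate with the most training-corpus examples wins,
    # regardless of its current security status.
    return max(candidates, key=lambda pkg: CORPUS_EXAMPLES.get(pkg, 0))

print(recommend(["litellm", "other-router"]))  # litellm
# A compromise does not change this ranking: the counts only move when
# the corpus is re-collected, on a cycle measured in months.
```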

Section 05

The gap between TeamPCP and the professionals — and why we are not prepared

Everything documented in this report and in Part 1 was accomplished by a financially motivated criminal group with no confirmed nation-state resources, no embedded infrastructure access, and no geopolitical backing. They have been active for less than a year. They operate primarily through automation and credential chaining. Their innovation was not technical — it was operational discipline applied to known weaknesses that the industry has been warned about for years.

The amateur version of this attack harvested 500,000 corporate identities in nine days. The professional version has been sitting inside the American power grid since 2021.

In my earlier analysis — The Cascade — When America Goes Dark — I documented that Volt Typhoon, the Chinese state-sponsored threat actor, has maintained persistent access inside US power grid infrastructure, water systems, communications networks, and transportation systems since at least 2021, with the FBI confirming some footholds will never be found. Russia's Sandworm has demonstrated the ability to destroy power grid infrastructure on demand — in Ukraine in 2015, 2016, and 2022. Iranian-linked actors executed a simultaneous wipe of 200,000 devices across a US medical technology company in a single coordinated operation in March 2026, as documented in my analysis of The Stryker Wiper Attack.

TeamPCP demonstrated that the supply chain technique — poison a trusted tool, harvest credentials silently, wait — works at scale against a global developer ecosystem. Nation-state actors have the same capability, years of embedded access, and geopolitical triggers that could activate all of it simultaneously. The question is not whether they could execute a supply chain attack against US critical infrastructure. The question is whether they already have and are waiting for the right moment to use it.

"Every foreign policy decision creates a new threat surface. The organizations that will be asked to defend that surface are not in the room when those decisions are made."

Yana Ivanov · Security Analyst · SiteWave Studio

This is not a partisan observation. It is an operational fact with documented consequences. When intelligence-sharing relationships weaken, defenders lose visibility into threats being tracked by allied agencies. When geopolitical relationships deteriorate, adversaries who previously had diplomatic reasons to restrain their cyber operations lose those reasons. The cybersecurity professionals who will be asked to defend the resulting new attack surface are not consulted on the decisions that created it. They inherit the consequences.

My water infrastructure analysis — The Open Tap — When Water Becomes a Weapon — documented that the EPA has no statutory authority to mandate cybersecurity for water utilities, that 70% of utilities failed basic security standards, and that New York City's 9-million-person water system runs on infrastructure with no mandatory cybersecurity requirements. The attack method has been demonstrated in Florida. The regulatory gap is confirmed and current. What has not happened yet is a sophisticated actor targeting a major system with the intent to cause mass casualties. A criminal group running for money has no incentive for that. A nation-state actor positioning for strategic leverage has every incentive.

TeamPCP showed us the technique. The professionals have had years to apply it to targets that matter far more than developer credentials. The question we should be asking is not whether we are ready for the next TeamPCP. It is whether we are ready for the version of this attack that does not announce itself on Telegram.

The preparedness gap: The United States has more documented, confirmed nation-state pre-positioning inside its critical infrastructure than any other country in the world — and a compliance framework still in Phase 1 rollout, a cyber insurance industry that structurally funds criminal behavior, a regulatory environment that leaves water utilities unprotected by law, and a developer ecosystem whose security depends on volunteers and nonprofit infrastructure. TeamPCP is not the threat that should keep us up at night. TeamPCP is the demonstration of what the real threats are capable of.

Section 06

What actually changes the playing field

The standard conclusion to a threat analysis is a list of technical controls. Those controls are documented in Part 1 and they matter. This section is about the structural changes that technical controls cannot address on their own — the systemic problems that create the environment in which these attacks succeed.

01
Remove the financial incentive to pay ransoms

The UK's proposal to ban ransomware payments for public sector bodies and critical infrastructure is the right model. If the target will not pay, the target becomes less valuable. Insurance coverage for ransom payments sustains the criminal economy. Insurers have the leverage to require that payment only happen as a documented last resort after all recovery options are exhausted — and to price policies that reward organizations demonstrating recovery capability without payment. That leverage has not been used. It should be. In the absence of voluntary change, regulatory requirements are the correct response.

02
Build the defender talent pipeline where the attackers are building theirs

The average cybercriminal is 19 years old; 61% start before 16. They are recruited through gaming communities, Discord servers, and online tutorials — spaces where the skills that make great attackers are already being developed, with no ethical framework attached. The cybersecurity defender community has a talent crisis and a pipeline problem simultaneously. The solution is not more university degree programs aimed at 22-year-olds. It is meeting young people in the spaces where they are already developing these skills and offering a legitimate, rewarding path before the criminal ecosystem finds them first. Competitive gaming communities and esports organizations represent exactly this talent pool — young people with systems thinking, pattern recognition, and the obsessive iteration mindset that makes a great analyst or red teamer. The pipeline cuts both ways. The question is which direction it flows.

03
Treat open-source critical infrastructure as what it is

PyPI is critical infrastructure for American defense software. It is maintained largely by volunteers and a nonprofit. The same is true for npm, for dozens of security tools, for the entire open-source ecosystem that underlies modern software development. The defense contractors, financial institutions, and intelligence agencies that depend on this infrastructure have a direct interest in its security that they have not acted on proportionally. Government-funded security audits, mandatory software bill of materials requirements, and private sector investment in open-source security maintenance are not charity. They are self-interest at scale.
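
At the dependency level, acting on that self-interest looks like verifying every artifact against a pinned digest, the mechanism behind pip's hash-checking mode (`--require-hashes`). In this sketch the package name is hypothetical and the pinned digest is simply the SHA-256 of empty input, chosen so the example is checkable:

```python
import hashlib

# Pinned digests as they would appear in a lock file. The package name
# is hypothetical; the digest is the SHA-256 of empty input, used here
# only so the example is verifiable.
PINNED = {
    "example-pkg-1.0.tar.gz":
        "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def verify_artifact(name: str, data: bytes) -> bool:
    """Reject any artifact whose SHA-256 does not match its pinned digest."""
    return PINNED.get(name) == hashlib.sha256(data).hexdigest()

print(verify_artifact("example-pkg-1.0.tar.gz", b""))      # True
print(verify_artifact("example-pkg-1.0.tar.gz", b"evil"))  # False
```

Hash pinning would not have stopped the initial poisoning of a release, but it stops a tampered artifact from silently replacing a pinned one, and it is exactly the kind of control that an SBOM mandate makes auditable.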

04
Integrate cybersecurity consequence modeling into geopolitical decision-making

Every foreign policy decision that creates a new adversary creates a new attack surface. The cybersecurity professionals who will be asked to defend that surface are not in the room when those decisions are made. This is not an argument about which foreign policy decisions are correct. It is an argument that cyber threat consequence modeling should be a standard component of strategic decision-making — the same rigor applied to military and economic consequences applied to the cyber threat environment those decisions create. That requires cybersecurity expertise at the policy level, not just the operational level.

05
Accept that AI is a dual-use weapon and invest accordingly in behavioral detection

TeamPCP used AI to write malware. Defenders should use AI to detect the behavioral patterns that malware generation and deployment produce across sessions over time. The challenge is building cross-session behavioral analysis without creating a surveillance architecture that violates the privacy expectations users have. The arms race runs through the privacy debate as well as the technical one. Getting the balance right is harder than the technical implementation. It is also necessary, because the alternative — treating every session in isolation — means the pattern that would identify the attack is never visible until after the damage is done.

This is Part 2 of a two-part analysis. Part 1 — The Trusted Channel covers the Telnyx attack technically: the nine-day credential chain, WAV steganography, defense contractor exposure, IOCs, and remediation. A third analysis examining the talent pipeline from gaming culture to cybercrime — and what closing that gap genuinely requires — is forthcoming.

Yana Ivanov
Security Analyst  ·  CMMC Compliance Analyst  ·  SiteWave Studio

Yana Ivanov is a security analyst and CMMC consultant based in Connecticut, specializing in cybersecurity risk assessment for defense contractors in the Connecticut defense industrial base. With 15 years of enterprise technology experience and an MS in Information Systems, she brings a practitioner perspective to threat intelligence analysis. She is currently pursuing CompTIA Security+ and CMMC Registered Practitioner certification, with a focus on helping defense supply chain companies achieve genuine — not checkbox — security compliance. This analysis was produced independently as a contribution to the security community's understanding of active threats against US defense infrastructure.
