The wrapper changed. The vulnerability didn't.
If you remember the sound of a dial-up modem connecting, you remember what the early internet felt like — exciting, slightly lawless, and new enough that nobody had written the rules yet. Napster lived in that era. We knew that what we were doing sat somewhere in a gray area — downloading files from strangers on the other side of the world whose names we'd never know. We justified it easily enough. We were young, we didn't have money, the music was right there, and everyone around us was doing the same thing. Nobody stopped to ask what else might be riding along in those files. The trust was in the platform, the familiarity, the sheer volume of other people doing it. That felt like enough.
Twenty-five years later, a threat actor called TeamPCP hid malware inside an audio file and delivered it through one of the most trusted software repositories in the world. The file was called ringtone.wav. The platform served over 300 billion downloads a year. Millions of developers used it daily without a second thought — because everyone around them was doing the same thing.
The wrapper changed. The vulnerability didn't.
This report examines how TeamPCP executed a nine-day credential-chaining campaign ending with the Telnyx Python package being poisoned on March 27, 2026 — and what it reveals about a structural trust problem that runs from open-source repositories straight through to the defense contractors building America's next generation of weapons systems.
What Telnyx is — and why that matters
Telnyx is not a consumer product. Most people have never heard of it, and that is by design — it is invisible infrastructure: a software development kit that developers drop into their applications to add phone-call, SMS, and voice features without building that plumbing from scratch. If you have ever used an app that lets you call customer support with one tap, or received a verification code by text, something like Telnyx was probably behind it.
That invisibility is precisely what made it a target. Telnyx gets installed once and forgotten. It sits in a project's dependency list alongside dozens of other libraries, all trusted implicitly, none audited individually. A developer at a medical device company, a defense contractor, a financial institution does not install it because they evaluated it against alternatives. They install it because it has good documentation, a million downloads a month, and the team lead used it at the last job. Familiarity is the entire credential check.
PyPI — the Python Package Index — is where it lives. Think of it as an app store for developers, except it hosts over 500,000 packages and serves more than 300 billion downloads a year. It is critical infrastructure for essentially all modern software development, relied upon by defense contractors, hospitals, intelligence agencies, and financial institutions. It is also built almost entirely on trust. Anyone with a valid account and a publishing credential can push a new version of a package. The platform authenticates the sender. It does not inspect what was sent.
Who depends on PyPI: Python is the primary language for AI development, data analysis, and automation tooling across the defense industrial base. Electric Boat, Raytheon, Leidos, Pratt & Whitney — their software teams pull from PyPI daily. So do developers at every medical device company, every hospital system, every intelligence contractor. The supply chain for American defense software runs through a nonprofit repository maintained largely by volunteers.
Nine days. Three dominoes. One credential chain.
The Telnyx compromise did not begin with Telnyx. It was the final link in a nine-day campaign where each successful attack handed TeamPCP the credentials to execute the next one. To understand how that chain worked, it helps to understand what CI/CD pipelines are — because that is where the credentials lived.
CI/CD stands for Continuous Integration / Continuous Deployment. It is the automated system that runs whenever a developer pushes new code: it tests the software, builds it, and publishes new versions to places like PyPI. These pipelines run automatically and to do their job they store credentials — publishing tokens, API keys, access secrets — as environment variables. Whoever holds those credentials can publish packages as if they were the legitimate author. TeamPCP's strategy was built entirely around that single fact.
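The exposure is easy to demonstrate. A minimal Python sketch — variable names are illustrative, not taken from the actual campaign — showing that any code executing inside a pipeline, including a compromised third-party tool, can enumerate the pipeline's secrets in one pass:

```python
import os

# Illustrative prefixes only — real pipelines commonly hold tokens named
# like PYPI_API_TOKEN, NPM_TOKEN, DOCKER_PASSWORD, AWS_SECRET_ACCESS_KEY.
SENSITIVE_PREFIXES = ("PYPI_", "NPM_", "DOCKER_", "AWS_", "GITHUB_")

def visible_secrets(env=None):
    """Return every environment variable that looks like a credential.
    Anything running in the pipeline process can do exactly this."""
    env = os.environ if env is None else env
    return {k: v for k, v in env.items() if k.startswith(SENSITIVE_PREFIXES)}

# A toy environment standing in for a CI job's variables.
example_env = {"PYPI_API_TOKEN": "pypi-xxxx", "PATH": "/usr/bin", "HOME": "/ci"}
print(visible_secrets(example_env))  # → {'PYPI_API_TOKEN': 'pypi-xxxx'}
```

One dictionary comprehension is the entire "harvester" — which is why scoped, short-lived tokens matter far more than hiding the variable names.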
Trivy is an open-source vulnerability scanner by Aqua Security, used in thousands of CI/CD pipelines to check for security flaws before code ships to production. TeamPCP force-pushed malicious binaries to 75 of 77 Trivy GitHub Action tags. Every pipeline running Trivy without a pinned version silently downloaded the backdoor and executed it with full pipeline access. npm tokens, Docker credentials, and PyPI publishing tokens were swept from every victim. By end of day, 44 Aqua Security GitHub repositories had been renamed with the prefix tpcp-docs- — a taunt, and a calling card.
Using npm tokens stolen from Trivy victims, TeamPCP deployed CanisterWorm — a self-propagating backdoor that automated the next wave entirely. Given one stolen publishing token, it enumerated every package that token could publish to, bumped version numbers to appear as routine updates, and pushed malicious releases across entire package scopes. The full sweep completed in under 60 seconds. Every developer who ran a standard install command received the backdoor as part of what looked like a legitimate update.
LiteLLM is a widely used Python library for developers building applications on top of AI models. TeamPCP poisoned it using PyPI tokens stolen from Trivy victims. The harvester swept environment variables, .env files, and shell history from every system that imported it. Any developer or CI/CD pipeline with both LiteLLM installed and access to a Telnyx publishing token had already handed that token over — without knowing it. As researchers at Endor Labs put it, "the three-day gap fits the time needed to sift through stolen credentials and pick the next target."
Using the Telnyx PyPI token harvested from LiteLLM victims, TeamPCP published versions 4.87.1 and 4.87.2. The first had a typo that broke the payload — corrected 16 minutes later. Both packages carried a delivery mechanism that had not appeared in this campaign before: the payload was not embedded in the package at all. It was hidden inside WAV audio files fetched live from a command-and-control server, extracted in memory, and executed silently. By 10:13 UTC, PyPI had quarantined both versions. The window was six hours.
The pattern: TeamPCP never needed to breach Telnyx directly. No vulnerability in Telnyx's code was exploited. They held a valid publishing credential — stolen three steps back from a security scanner that thousands of pipelines trusted. The attack required no sophistication at the final stage because all the work had already been done upstream.
Hiding malware in sound
The first question most people ask when they hear about this attack is the same one I had: who still uses WAV files? The format feels dated — large, uncompressed, something from a recording studio or an old system alert. The answer is that WAV is used extensively in professional audio, broadcast, game development, medical equipment, and defense systems. But more to the point: a developer tool playing a notification tone uses WAV. It is unremarkable. Expected. Nobody questions a software package that fetches an audio file.
That is exactly why TeamPCP chose it.
What steganography means in plain terms
Steganography is hiding information inside something that does not look like it contains information. It is different from encryption — encryption scrambles data so it cannot be read; steganography conceals it so no one knows it exists at all. A WAV file is made of audio frames — data blocks that represent sound. TeamPCP packed executable malware into those frame bytes. The file still had a valid WAV header. Any tool checking whether it was a legitimate WAV confirmed it was. The malware sat where the audio samples should have been.
To the systems watching, the sequence was unremarkable:

1. An outbound HTTP request for a file called ringtone.wav, made by a package that had just been imported.
2. A valid WAV file returned — correct headers, correct structure, no executable content, no known malware signature.
3. Network monitoring logged an audio download and moved on.

What actually happened:

4. The WAV frames contained no audio. The bytes were an XOR-obfuscated payload, extracted in memory using Python's built-in wave module.
5. On Windows: a persistent dropper placed in the Startup folder, executing on every reboot. On Linux/macOS: credentials swept and exfiltrated, encrypted, to the attacker's server.
6. No executable was written to disk. No recognizable file. Minimal forensic trace.
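The mechanics can be sketched with the standard library alone. This is a benign illustration of the concept — a made-up payload and XOR key, not the actual malware — showing how arbitrary bytes hidden in WAV frames survive format validation and come back out entirely in memory:

```python
import io
import wave

KEY = 0x5A  # illustrative single-byte XOR key; the campaign's real key is not public

def make_carrier_wav(payload: bytes) -> bytes:
    """Build a structurally valid WAV whose 'audio' frames are an
    XOR-obfuscated payload. Header checks pass; the samples are not sound."""
    obfuscated = bytes(b ^ KEY for b in payload)
    buf = io.BytesIO()
    with wave.open(buf, "wb") as w:
        w.setnchannels(1)      # mono
        w.setsampwidth(1)      # 8-bit samples
        w.setframerate(8000)
        w.writeframes(obfuscated)  # payload sits where audio samples should be
    return buf.getvalue()

def extract_in_memory(wav_bytes: bytes) -> bytes:
    """Read the frames back with the stdlib wave module and de-XOR —
    nothing is ever written to disk."""
    with wave.open(io.BytesIO(wav_bytes), "rb") as w:
        frames = w.readframes(w.getnframes())
    return bytes(b ^ KEY for b in frames)

carrier = make_carrier_wav(b"not audio at all")
assert carrier[:4] == b"RIFF"                         # passes any WAV format check
assert extract_in_memory(carrier) == b"not audio at all"
```

Any tool that validates the RIFF/WAVE structure will accept the carrier; only something that asks whether the frames are plausible audio would notice.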
Why fetching the payload live changes everything
TeamPCP's earlier attacks embedded payloads directly inside packages — static code that scanners eventually learned to flag. This version fetched the payload at runtime from a live server. The package itself contained almost nothing suspicious. Static analysis found little to flag. And because the payload lived on the attacker's server, TeamPCP retained full control after publication: they could update it, swap it, serve different malware based on operating system, or go silent without touching the package at all.
The weapon lived on the server. The package was just the trigger.
This is a direct iteration on how the previous generation of attacks got caught. In my earlier analysis of the Glassworm campaign — The Invisible Threat — the payload was embedded directly in source code using invisible Unicode characters. Scanners eventually learned to detect that pattern. TeamPCP's response was to remove the payload from the package entirely. The detection gap that caught the previous technique was studied and engineered around. The attackers are reading the same security disclosures as the defenders.
The question everyone asks — and why it misses the point
The instinctive response to this attack is reasonable: wouldn't developers be smart enough to catch it? They are technical people who read code and understand how systems work. Targeting them seems risky.
It is a reasonable instinct. It is also based on a false assumption — that developers had the opportunity to look.
When a developer wants to verify a package before installing it, they go to GitHub — the source code repository where the human-readable version lives. The GitHub source for Telnyx was completely clean. The poisoning existed only in the compiled artifact uploaded to PyPI: the thing developers actually downloaded, not the thing they could inspect. The one place a cautious developer would look showed them nothing wrong.
**Critical — The source was clean; only the artifact was poisoned.** Neither version 4.87.1 nor 4.87.2 had a corresponding GitHub release tag — the one technical tell — but most automated dependency managers do not check for this. Developers who did the right thing found nothing wrong, because the compromise happened at the delivery layer, not the source layer.
**Critical — The malware ran the instant the library was imported.** The malicious code was injected into telnyx/_client.py — the core file that loads when any Python application imports the library. There was no install hook to block, no suspicious script to review; simply having import telnyx in an application was enough. In production environments, that means the malware activated on every server restart, every CI/CD build, every test run — potentially across dozens of machines in parallel during the six-hour window.
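The import-time behavior is ordinary Python, not an exploit. A small self-contained demonstration (the module name `innocuous` is invented) of top-level code executing the moment a module is imported, with no function ever called:

```python
import importlib
import os
import sys
import tempfile
import textwrap

# A stand-in module with code at top level, the way the malicious logic
# sat at the top of telnyx/_client.py.
SOURCE = textwrap.dedent("""
    RAN = []
    RAN.append("executed at import time")  # runs on `import`, no call needed
""")

tmpdir = tempfile.mkdtemp()
with open(os.path.join(tmpdir, "innocuous.py"), "w") as f:
    f.write(SOURCE)

sys.path.insert(0, tmpdir)
mod = importlib.import_module("innocuous")

# Nothing was invoked — merely importing the module executed its body.
assert mod.RAN == ["executed at import time"]
```

This is why "we never called anything from that library" is no defense: the import statement itself is the execution.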
**High — Professional environments train people to trust the channel.** In practice, developers trust their tools the same way anyone trusts a professional environment. A designer installs a plugin because it solves a problem. A developer imports a library because the team has used it for two years. In my own experience across enterprise environments, I was never once asked to verify what a third-party tool had access to before using it; no process existed for it. That is not negligence — it is how professional software development actually works at most organizations, including the ones that should know better.
The developer was not the target. The developer was the door. TeamPCP was not after personal files. They wanted what lives on a developer's machine at a defense contractor or medical company: cloud credentials, database tokens, GitHub access to internal repositories, API keys for production systems. The developer's workstation is the path to the infrastructure behind them — and it is typically the least monitored machine on the network, sitting in a gray zone between user endpoints and production servers.
The defense contractor gap — what CMMC doesn't cover
Defense contractors handle this class of risk every day. The question of whether they adequately restrict what developers can install is one most organizations would prefer not to answer in writing.
On paper, the framework exists. CMMC Level 2 — now contractually mandatory for the 220,000+ contractors handling Controlled Unclassified Information — requires compliance with NIST SP 800-171 supply chain controls. Updated NIST guidance has introduced requirements for centralized component inventories, which amounts to a software bill of materials mandate. Lockheed Martin and Boeing are already requiring suppliers to document their compliance posture. The language is clear and the enforcement is real.
The reality on the ground is more complicated. In my analysis of the CMMC compliance landscape — The Weakest Link — I documented that only 1% of defense contractors are currently audit-ready, a figure that has been declining as enforcement has increased. The barriers are structural: cost, understaffed IT teams, a compliance culture that treats certification as a checkbox rather than an actual security posture. A 50-person precision machining subcontractor handling technical drawings for an Electric Boat submarine program is not running a private PyPI mirror with vetted, approved packages. Their developers are pulling from the public registry the same way everyone else does.
More specifically: CMMC tells contractors what to protect. It does not tell developers which packages they are allowed to use. There is no approved library list. No package firewall. No process that intercepts a pip install telnyx command and asks whether this version has been verified against the GitHub source. The controls focus on protecting CUI after it enters the environment — not on what happens on a developer's workstation during an automated build at 3:51 in the morning.
That gap — between "you must protect your supply chain" and "your developer just imported malware during an automated build" — is exactly where TeamPCP operated.
The scenario: A developer at a Connecticut defense subcontractor is building an application using Telnyx for voice features. Their CI/CD pipeline runs a build at 4am. It imports the Telnyx library. The malware activates silently, harvests every credential on that machine — including a GitHub token with access to the repository where the program's source code lives — and exfiltrates it encrypted to a server in Eastern Europe. The developer starts work at 9am and notices nothing. The CMMC audit is scheduled for November.
What should have been in place
**Pin exact versions.** Writing telnyx without a version number means the package manager downloads whatever is currently listed as latest — including any malicious update pushed with valid credentials. Pinning to an exact version is a one-line change that blocks silent updates. It is standard practice in security-conscious organizations and absent in most others.
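A sketch of what that check can look like, assuming a simple requirements.txt-style input. The parsing here is deliberately naive — it ignores hash pinning, extras, and environment markers — but it captures the core question: does this line float to "latest"?

```python
import re

def unpinned(requirements: list[str]) -> list[str]:
    """Return requirement lines that do not pin an exact version (==).
    A bare name, or a range like >=, resolves to whatever 'latest' is."""
    loose = []
    for line in requirements:
        line = line.split("#")[0].strip()  # drop comments and whitespace
        if not line:
            continue
        if not re.search(r"==\s*[\w.!+*]+", line):
            loose.append(line)
    return loose

# Hypothetical file contents for illustration.
reqs = ["telnyx", "litellm>=1.0", "requests==2.32.3  # pinned"]
print(unpinned(reqs))  # → ['telnyx', 'litellm>=1.0']
```

Run against a real project, the loose lines are exactly the places where a poisoned "latest" release walks in uninvited.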
**Pin the security tooling too.** This attack began by compromising Trivy — a security scanner built to protect CI/CD pipelines from supply chain attacks. Organizations running Trivy without pinning its version were trusting a backdoored tool. The assumption that security tools are inherently trustworthy is exactly what TeamPCP exploited first. Every tool in a development environment, including the security tooling, must be version-pinned and verified.
**Keep secrets out of plain environment variables.** The Trivy backdoor harvested credentials by reading environment variables — the standard location for publishing tokens in CI/CD pipelines. A dedicated secrets vault with scoped, time-limited tokens means a single compromised tool cannot access credentials for unrelated systems. Most development teams store all credentials as plain environment variables readable by any process in the pipeline.
**Maintain a software bill of materials.** Without an SBOM, organizations cannot quickly determine whether a compromised package version exists anywhere in their environment when a supply chain attack is disclosed. Incident response that should take minutes takes hours of manual searching across projects. CMMC and updated NIST 800-172 guidance push toward SBOM requirements. This attack is the practical illustration of why that matters.
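A minimal sketch of the lookup, assuming a CycloneDX-shaped SBOM with name and version fields (the inventory contents below are invented for illustration):

```python
import json

# Minimal CycloneDX-shaped SBOM — real SBOMs carry far more metadata,
# but name + version is all this check needs.
sbom_json = json.dumps({
    "components": [
        {"name": "requests", "version": "2.32.3"},
        {"name": "telnyx",   "version": "4.87.2"},
    ]
})

# The two malicious releases from this campaign.
COMPROMISED = {("telnyx", "4.87.1"), ("telnyx", "4.87.2")}

def affected(sbom: str) -> list[tuple[str, str]]:
    """Answer 'is the bad version anywhere in our environment?' in one pass."""
    comps = json.loads(sbom).get("components", [])
    return [(c["name"], c["version"]) for c in comps
            if (c["name"], c["version"]) in COMPROMISED]

print(affected(sbom_json))  # → [('telnyx', '4.87.2')]
```

With an SBOM per project, this query scales to an entire portfolio in seconds; without one, it becomes a manual hunt through hundreds of lockfiles.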
**Watch pipeline egress as closely as production.** The malware made an outbound HTTP request to a raw IP address on a non-standard port to fetch its audio payload. That connection is anomalous — a legitimate package import does not call home to 83.142.209.203:8080. Most organizations monitor production servers closely and build pipelines loosely. The pipelines hold the credentials; they deserve the same level of scrutiny.
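A coarse heuristic for that anomaly can be sketched in a few lines — flagging egress to raw IP addresses or non-standard ports. This is an illustration of the signal, not a substitute for real egress monitoring:

```python
import ipaddress
from urllib.parse import urlparse

STANDARD_PORTS = {80, 443}

def suspicious_egress(url: str) -> bool:
    """Flag build-pipeline egress that matches the shape of the
    ringtone.wav fetch: a raw IP destination or a non-standard port."""
    p = urlparse(url)
    try:
        ipaddress.ip_address(p.hostname or "")
        raw_ip = True                     # destination is a bare IP, no DNS name
    except ValueError:
        raw_ip = False
    port = p.port or (443 if p.scheme == "https" else 80)
    return raw_ip or port not in STANDARD_PORTS

print(suspicious_egress("http://83.142.209.203:8080/ringtone.wav"))  # → True
print(suspicious_egress("https://pypi.org/simple/telnyx/"))          # → False
```

In a pipeline, the equivalent rule belongs at the network layer — deny by default, allow the registries you actually use — but the classification logic is this simple.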
MITRE ATT&CK mapping
| Technique ID | Technique | Severity | How it appeared in this attack |
|---|---|---|---|
| T1195.001 | Supply Chain Compromise — Software Dependencies | Critical | Core technique. Legitimate packages poisoned by publishing malicious versions under valid stolen credentials. Downstream developers unknowingly installed malware as part of a routine dependency update. |
| T1001.002 | Data Obfuscation — Steganography | Critical | Malware payload encoded inside WAV audio frame data using XOR obfuscation, disguised as audio to bypass network inspection and endpoint security tools that do not flag audio file downloads as suspicious. |
| T1552.001 | Unsecured Credentials — Credentials in Files | Critical | Harvester specifically targeted .env files and shell history — standard locations where developers store API keys, database passwords, and service tokens in plaintext without additional protection. |
| T1027 | Obfuscated Files or Information | High | Second-stage payload XOR-encoded inside WAV frames and executed in memory without being written to disk as a recognizable executable, evading static file scanning and complicating forensic recovery. |
| T1041 | Exfiltration Over C2 Channel | High | Stolen credentials encrypted with AES-256-CBC and RSA-4096 then sent via HTTP POST to 83.142.209.203:8080 — the same server used to deliver the initial payload, doubling as both delivery and exfiltration infrastructure. |
| T1547.001 | Boot Autostart — Startup Folder | High | On Windows, malware dropped as msbuild.exe into the Startup folder, executing automatically on every system restart and establishing persistent access beyond the initial six-hour exposure window. |
Indicators of compromise and immediate actions
Any environment that imported telnyx between 03:51 and 10:13 UTC on March 27, 2026 should be treated as fully compromised. The malware executed at import time — meaning it ran on every system start, every build, every test run during that window, potentially across dozens of machines simultaneously.
```
telnyx 4.87.1, 4.87.2    // Published 03:51–04:07 UTC March 27, 2026. Quarantined by PyPI. Last clean version: 4.87.0
83.142.209.203:8080      // Block immediately. Any historical connection to this IP confirms active compromise.
ringtone.wav             // Linux/macOS credential harvester. Delete if found. Check for accompanying .lock file in same directory.
                         // TeamPCP signature in HTTP POST headers. Search proxy and network logs.
```
How it was caught, what was taken, and what comes next
Who discovered it — and how
The compromise was not caught by PyPI's own systems, nor by Telnyx, nor by any developer who inspected the package. It was caught by private application security firms — specifically Socket and Endor Labs, who published their findings on March 27, with Aikido Security and Wiz (now part of Google Cloud) independently reaching the same conclusion within hours.
The detection method is worth understanding because it is replicable. The tell was a version mismatch between two places: the PyPI registry showed versions 4.87.1 and 4.87.2 as the latest releases, but the GitHub source repository had no corresponding release tags for either version. Version 4.87.0 had a tag. These two did not. That gap — a package that exists in the distribution channel but not in the source repository — is the fingerprint of a compromised publishing credential. Someone pushed to PyPI using a stolen token, not through a normal development and release process. Automated monitoring tools that compare PyPI publication history against GitHub release tags flagged the anomaly within hours.
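The core of that comparison is a set difference. A sketch with the version and tag lists hard-coded from the reported data rather than fetched live — in practice, the PyPI side comes from the package's JSON endpoint and the GitHub side from the repository's tag list:

```python
def ghost_versions(pypi_versions: set[str], github_tags: set[str]) -> set[str]:
    """Versions that exist in the distribution channel but not in the
    source repository — the fingerprint of a stolen publishing credential."""
    normalized = {t.lstrip("v") for t in github_tags}  # 'v4.87.0' -> '4.87.0'
    return {v for v in pypi_versions if v not in normalized}

# As reported for telnyx on March 27: 4.87.0 was tagged, the two
# malicious versions were not.
print(ghost_versions({"4.87.0", "4.87.1", "4.87.2"}, {"v4.87.0"}))
# → {'4.87.1', '4.87.2'}
```

Run on a schedule against every dependency you publish or consume, this one comparison is the "discipline to check that what is published matches what was built."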
This is a detection method that requires no advanced tooling. It requires only the discipline to check that what is published matches what was built.
What TeamPCP actually has
The scope of what the harvester collected goes significantly beyond "some credentials." On any Linux or macOS machine that imported the malicious package, the malware collected:

- SSH private keys and configurations
- Cloud provider credentials for AWS, Azure, and GCP
- Authentication tokens for Docker, npm, Git, and Vault
- Database connection strings
- All environment variables and .env files, with their embedded API keys and tokens
- Full shell and database command history
- Cryptocurrency wallet data
The worst-case scenario was Kubernetes. If a service account token existed anywhere on the compromised machine, the malware escalated beyond the workstation entirely — enumerating cluster secrets and deploying privileged pods to every node in kube-system, each mounting the host root filesystem. A developer's compromised laptop becomes a foothold inside the entire container infrastructure of the organization they work for.
On Windows the impact is different but persistent. The dropped executable in the Startup folder means those machines continue beaconing to TeamPCP's infrastructure on every reboot — even after the malicious package is removed. Removing the package does not remove the persistence implant. These are two separate remediation steps.
This is no longer just a credential theft operation. TeamPCP has formally partnered with two other criminal groups to scale exploitation of the stolen credentials before victims complete remediation.
Vect is an emerging ransomware-as-a-service operation — a model where a core criminal group builds the ransomware tooling and distributes it to a network of affiliates who carry out the actual attacks in exchange for a cut of the ransom. TeamPCP and Vect announced on BreachForums that all 300,000 registered forum users would receive personal Vect affiliate keys. For a sense of scale: LockBit, the previous benchmark for ransomware operations, had only ever opened 73 affiliate accounts. TeamPCP just handed keys to 300,000 people.
LAPSUS$ is an international extortion group that became notorious between 2021 and 2022 for breaching Microsoft, Nvidia, Samsung, Okta, and Uber — not through sophisticated malware, but through social engineering, SIM swapping, and insider recruitment. What made them infamous is who they turned out to be: teenagers. At least two core members were under 18. One founding member was sentenced indefinitely to a secure psychiatric facility in the UK. Despite this, they successfully compromised some of the most defended organizations in the world. LAPSUS$ has already publicly claimed a 3GB breach of AstraZeneca using credentials from the TeamPCP campaign — internal code repositories, cloud infrastructure configurations, GitHub Enterprise data, and employee records.
The credentials stolen from Telnyx victims are not sitting dormant. They are being actively distributed across a criminal network of unprecedented scale — right now.
Should affected companies disclose — and to whom
This is not only an ethical question. In most jurisdictions it is a legal obligation with specific timelines, and missing those timelines carries regulatory consequences that can dwarf the cost of the breach itself.
The answer depends on what data was accessible from the compromised environment. If harvested credentials provided access to systems containing personal data — customer records, employee information, healthcare data, financial data — breach notification laws almost certainly apply. For organizations in or serving customers across key jurisdictions:
| Jurisdiction / Framework | Applies to | Deadline | Notify |
|---|---|---|---|
| Connecticut | Orgs with CT residents' data | 60 days | Affected individuals + Attorney General |
| New York / California | Orgs with NY/CA residents' data | 30 days | Affected individuals + state AG |
| GDPR | Any org processing EU personal data | 72 hours | Supervisory authority; individuals if high risk |
| SEC | US public companies | 4 business days | SEC Form 8-K if incident is material |
| HIPAA | Healthcare organizations | 60 days | HHS + affected individuals |
| CIRCIA / DoD | Critical infrastructure + defense contractors | 72 hours | CISA + DoD Cyber Crime Center (DFARS 252.204-7012) |
A common misconception is that these obligations only apply if personal data was directly stolen. That is too narrow. If credentials were harvested that provided access to systems containing personal data, and those credentials have now been distributed to a ransomware affiliate network, the breach of downstream systems may already have occurred or may be imminent. The notification clock does not start when the ransomware deploys. It starts when you have reason to believe unauthorized access to personal data was made possible.
For defense contractors specifically: a compromised developer machine that held credentials for systems containing Controlled Unclassified Information triggers a reporting obligation to the DoD Cyber Crime Center within 72 hours of discovery under DFARS 252.204-7012. This is separate from, and in addition to, any state breach notification requirements.
The practical answer: If your organization imported telnyx 4.87.1 or 4.87.2, do not wait for a ransomware deployment to trigger your incident response plan. Engage legal counsel immediately. The question is not whether you need to disclose — it is which obligations apply, in which sequence, on which timeline. That determination needs to happen now, not after the next breach disclosure in your environment confirms the stolen credentials were used.
This report is Part 1 of a two-part analysis. The full scale of the TeamPCP campaign — 500,000+ stolen corporate identities, the ransomware payment paradox, the AI infrastructure problem, and who bears responsibility — is examined in Part 2: The Blast Radius. The questions of whether companies should pay ransoms, how criminal groups like Vect and LAPSUS$ form and collaborate, and what it means that the tools building AI are now actively compromised are addressed there in full.