This content was originally published by SC Media.
On January 12, 2026, BleepingComputer reported that approximately 860GB of Target's source code and internal developer documentation appeared online. By January 13, multiple current and former employees confirmed the leaked materials were authentic.
The stolen data included internal technology stacks, CI/CD pipelines, Hadoop datasets, proprietary service names, and metadata containing internal engineer names. In total, roughly 57,000 file and directory names were exposed.
According to threat intelligence researchers, the breach likely began with an infostealer attack on a Target employee workstation in late September 2025. That workstation had extensive access to internal services, including IAM, Confluence, Jira, and internal wikis. From there, the attacker had a roadmap. And for roughly three to four months, they used it, exfiltrating 860GB of source code and documentation before anyone noticed.
Target's response was swift once the breach was discovered: they took their Git server offline and restricted access to the corporate VPN only. Internal memos indicated an "accelerated" change to access controls, suggesting the repositories may have been misconfigured and potentially accessible from the internet, even if behind authentication.
But by then, the damage was done. When looking at this breach, I see four distinct failures, each one preventable with the right visibility.
Four Failures, One Blind Spot
Failure #1: The initial compromise went undetected.
An employee workstation was infected with infostealer malware. The attacker gained access to credentials and session tokens. But here's the question: when those credentials started being used from an unusual location or at unusual times, did anyone notice?
Geographic anomalies and impossible travel patterns are early warning signs of compromised credentials. If an identity is logging in from two locations 5,000 miles apart within an hour, that's not a developer working from home. That's an attacker.
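Impossible-travel detection is straightforward to sketch: compute the great-circle distance between two consecutive logins and flag any pair whose implied speed exceeds what an airliner could cover. The coordinates, timestamps, and speed threshold below are illustrative assumptions, not details from the breach.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def is_impossible_travel(prev, curr, max_speed_kmh=900):
    """Flag consecutive logins whose implied travel speed exceeds
    roughly a commercial jet's (~900 km/h)."""
    dist = haversine_km(prev["lat"], prev["lon"], curr["lat"], curr["lon"])
    hours = (curr["ts"] - prev["ts"]) / 3600
    if hours <= 0:
        return dist > 0  # simultaneous logins from two different places
    return dist / hours > max_speed_kmh

# Hypothetical example: logins from New York and Moscow one hour apart
nyc = {"lat": 40.71, "lon": -74.01, "ts": 0}
moscow = {"lat": 55.76, "lon": 37.62, "ts": 3600}
print(is_impossible_travel(nyc, moscow))  # True
```

Real implementations layer in GeoIP accuracy, VPN egress points, and known travel, but the core test is this simple.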
Failure #2: Mass cloning activity went unnoticed.
860GB is not a quick download. That's sustained, bulk cloning activity across thousands of files over weeks or months. Someone was accessing repositories they likely had never touched before, downloading everything.
Behavioral baselines matter. When an identity suddenly starts cloning repositories they've never accessed, especially dormant or sensitive ones, that's a signal. A strong one. Combined with other anomalies, it's a threat.
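A minimal baseline can be nothing more than the set of repositories each identity has touched before, plus a daily clone counter. The class below is a sketch with invented names and an arbitrary threshold; production systems would persist this state and tune limits per team.

```python
from collections import defaultdict

class CloneBaseline:
    """Track which repos each identity normally clones and how many
    clones per day are typical; return alerts on departures."""

    def __init__(self, daily_clone_limit=20):
        self.known_repos = defaultdict(set)    # identity -> repos seen before
        self.daily_clones = defaultdict(int)   # (identity, day) -> clone count
        self.daily_clone_limit = daily_clone_limit

    def record(self, identity, repo, day):
        alerts = []
        if repo not in self.known_repos[identity]:
            alerts.append(f"{identity}: first-ever clone of {repo}")
        self.known_repos[identity].add(repo)
        self.daily_clones[(identity, day)] += 1
        if self.daily_clones[(identity, day)] == self.daily_clone_limit + 1:
            alerts.append(f"{identity}: bulk cloning on day {day}")
        return alerts
```

An identity that suddenly clones dozens of never-before-touched repositories in one day would trip both alerts at once.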
Failure #3: The Git server was misconfigured.
Reports suggest Target's Git server may have been accessible from the public internet prior to the breach. Even with authentication, that's a posture problem. Development infrastructure should be locked down with the same rigor as production systems.
Misconfigured SCM tools, overly permissive CI/CD pipelines, exposed artifact repositories: these are the entry points attackers look for. And most organizations don't monitor them continuously.
Failure #4: Privileged access persisted for months.
The compromised workstation had "extensive access" to IAM, Confluence, Jira, and internal wikis. That's a lot of privilege for a single identity. And those credentials were used for months without triggering any alarms.
Over-privileged accounts and stale credentials are the bread and butter of attackers. If no one is reviewing who has access to what and whether that access is still appropriate, you're handing attackers the keys.
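Reviewing access is mostly a question of joining grants against last-use data. The helper below is a hypothetical sketch: it flags any grant that has never been exercised or has sat idle past a cutoff, which is exactly the kind of stale privilege an attacker inherits with a stolen credential.

```python
from datetime import datetime, timedelta

def stale_grants(grants, now, max_idle_days=90):
    """Return access grants not exercised within max_idle_days.

    `grants` is a list of dicts with keys: identity, resource,
    last_used (a datetime, or None if never used).
    """
    cutoff = now - timedelta(days=max_idle_days)
    return [g for g in grants if g["last_used"] is None or g["last_used"] < cutoff]

# Hypothetical review as of mid-January 2026
now = datetime(2026, 1, 13)
grants = [
    {"identity": "svc-build", "resource": "iam-admin", "last_used": datetime(2025, 9, 1)},
    {"identity": "dev1", "resource": "repo-x", "last_used": datetime(2026, 1, 10)},
    {"identity": "contractor", "resource": "wiki", "last_used": None},
]
for g in stale_grants(grants, now):
    print(f"review: {g['identity']} -> {g['resource']}")
```

Running this kind of query continuously, rather than in an annual audit, is what shrinks the window an attacker can live inside.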
The Common Thread: Identity
Each of these failures has one thing in common: they're not about code vulnerabilities. They're about identity. The attacker didn't exploit a CVE. They exploited a compromised identity with too much access, in an environment with too little visibility, for way too long.
Traditional security tools like SAST, SCA, and DAST scan code for vulnerabilities. They're essential, but they wouldn't have caught this. You can't scan your way out of compromised credentials. You can't patch your way out of an insider threat.
What you need is visibility into identity behavior across your development environment that can answer: who is accessing what? When? From where? Is this normal for them? And when multiple signals converge (unusual location, bulk downloads, access to unfamiliar repos) you need to catch it before 860GB walks out the door.
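One common way to operationalize converging signals is additive risk scoring: each anomaly carries a weight, and an alert fires only when the combined score crosses a threshold. The signal names, weights, and threshold below are invented for illustration.

```python
# Hypothetical weights; real systems tune these against historical data.
SIGNAL_WEIGHTS = {
    "impossible_travel": 40,
    "bulk_cloning": 30,
    "unfamiliar_repo": 20,
    "off_hours_access": 10,
}

ALERT_THRESHOLD = 50

def risk_score(signals):
    """Sum the weights of observed signals for one identity in one window.
    Any single signal may be benign; convergence crosses the threshold."""
    return sum(SIGNAL_WEIGHTS.get(s, 0) for s in signals)

print(risk_score({"impossible_travel", "bulk_cloning"}) >= ALERT_THRESHOLD)  # True
print(risk_score({"off_hours_access"}) >= ALERT_THRESHOLD)                   # False
```

A developer working late scores 10 and is ignored; a credential logging in from the wrong continent while bulk-cloning unfamiliar repos scores 90 and pages someone.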
The Questions Every Security Team Should Ask
If this happened in your organization today, could you answer these questions within hours?
- Which identity exfiltrated the data?
- Was it an internal or external developer, a service account, or an AI agent?
- When did the unusual behavior start?
- What else did they access?
- Were there warning signs like geographic anomalies, bulk cloning, or unusual repo access that went undetected?
- Which systems were misconfigured?
- Which identities were over-privileged?
Security experts warn that exposed source code provides attackers with a roadmap, including hardcoded secrets, architectural logic, and API structures, that can fuel future attacks. And the exposure of internal engineer names and project details creates prime targets for spear phishing.
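The hardcoded-secrets risk is why leaked source code should immediately trigger a credential sweep. A toy scanner looks like the sketch below; the patterns are a tiny illustrative subset, and real tools such as gitleaks or truffleHog use far larger rule sets plus entropy analysis.

```python
import re

# Illustrative patterns only, not a complete rule set.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "generic_api_key": re.compile(r"(?i)api[_-]?key\s*[:=]\s*['\"][A-Za-z0-9]{16,}['\"]"),
    "private_key_header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def scan_for_secrets(text):
    """Return the names of any secret patterns found in the given text."""
    return [name for name, pat in SECRET_PATTERNS.items() if pat.search(text)]

# AWS's documented example key ID, not a live credential
print(scan_for_secrets("key = AKIAIOSFODNN7EXAMPLE"))  # ['aws_access_key']
```

Every match in a leaked repository is a credential that must be rotated on the assumption the attacker already has it.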
Target's breach isn't over. It's just entered a new phase.
For everyone else, it's a wake-up call. The development environment is the new attack surface. And identity is the control plane.