When I was coming up in the security field, I spent some time focused on network penetration testing. I loved running around in the air ducts of networks in the mid-2000s. It was the golden age of software exploitation, when organizations had tons of computers running software that didn't even try to defend itself.
At that point in time, there were a number of high-value targets (HVTs) that everyone automatically went after. Most organizations had heterogeneous networks but relied on Windows systems for users. As such, HVTs for those organizations were most often domain administrator accounts, domain controllers, SQL Server system administrator accounts and a grip of other Windows-centric security lore that mostly still causes trouble today. This was a time when just running a vulnerability scanner or even a port scanner could give a decent pentester a quick path to success, if it didn’t knock something over in the process.
A lot has changed in the past 20 years across software, security and business. Almost every company produces its own software, and it is developers who write, test and debug the code that is counted on to achieve technology innovation and business success. This has significantly increased the value of developers not only in the eyes of startups and big tech organizations, but also in the mind's eye of attackers. Developers have the AWS, SSH and GPG keys, often the signing keys, and unfettered access to build infrastructure and source code version control systems. These are the new keys to the kingdom: the intellectual property lifeblood of the organization. This makes developers the new HVTs and the shining target for software supply chain attackers.
The open-source ecosystem is the new battleground
Phylum has written several times before about how we automate detection of malicious packages. We’re incredibly proud of it; it’s the engine that has allowed us to detect tens of thousands of malicious open-source libraries. As we analyze the intent behind these malicious libraries, a trend doesn’t merely emerge; it punches you in the face: Our findings show that 99% of malicious open-source packages are designed to attack developer workstations and CI/CD build agents.
Developers are not the problem, they just want to do their job
There is an oft-spouted theory in the zeitgeist that “developers hate security tools.” It’s often followed up with a quip about how “security tools are too complex”. I couldn’t disagree more.
First, most software security tools designed for developer use have almost completely missed the mark in understanding the challenges of the end user: a software developer focused on building software, not a researcher who gets to explore the possibilities of software security. As an industry, we’ve often done a terrible job of having any sort of empathy for developers, instead looking at problems in a binary context of right and wrong from the perch of theoretical software security.
Second, the idea that security tools are too complex for developers is just belligerently wrong. Git is monumentally more complex than any security tool I can think of, yet even new developers use it successfully every day to get their work done. I’ve personally misused Git in ways that didn’t just shoot me in the foot; I’ve needed therapy and a shaman to come back from those sins. This doesn’t happen frequently to most developers because Git fits into a workflow that the huge majority can use consistently without having to think deeply about the complexity of the tool.
A by-product of this lack of developer empathy is the friction between appsec or prodsec teams tasked with defense and the development teams building software as fast as they can. Attackers are exploiting this friction to gain an advantage in targeting developers directly.
Because of this dynamic, attackers don’t even need to exert themselves to execute an attack. Advanced malware development techniques such as threat actor impersonation, process injection or even AV evasion need not apply. For example, we recently identified an active typosquatting campaign targeting NPM developers. The execution is dead simple: attackers publish a malicious package that mimics a legitimate open-source package, but with a minor name change that might look like a harmless typo to a developer. This attack has zero financial cost, is trivially scriptable and can be done at massive scale. In this case, the legitimate packages account for just over 1.2 billion (1,204,473,993) downloads per week, a gigantic attack surface targeting a huge number of developers! Thankfully, Phylum detected the attack and reported the malicious packages quickly so they could be removed. But there’s no telling how many developers may have been affected even in the span of minutes that these packages were live.
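To make the mechanics concrete, here is a minimal sketch of the kind of name-similarity screening a defender might run against newly published packages. This is an illustrative assumption, not Phylum's actual detection pipeline; the edit-distance routine and the popular-package list are stand-ins for the real thing.

```python
# Sketch: flag newly published package names that sit within a small edit
# distance of popular packages -- the simplest signal for a typosquat.
# The POPULAR list below is a hypothetical example, not real telemetry.

def edit_distance(a: str, b: str) -> int:
    """Classic dynamic-programming Levenshtein distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

POPULAR = ["lodash", "express", "react", "axios"]  # illustrative only

def possible_typosquats(candidate: str, max_distance: int = 1) -> list[str]:
    """Return popular packages the candidate name is suspiciously close to."""
    return [p for p in POPULAR
            if p != candidate and edit_distance(candidate, p) <= max_distance]

print(possible_typosquats("expres"))   # ['express'] -- one dropped letter
print(possible_typosquats("express"))  # [] -- exact match of a real package
```

A real screen would need to account for transpositions, confusable characters and scoped package names, but even this naive check illustrates why the economics favor the attacker: publishing a lookalike name costs nothing, while defenders must compare every new package against every popular one.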
Last month, our team found a set of malicious packages that were so badly executed, we had to report on them simply because of how terrible they were. One must try to find humor in life, and reading the source code of some of these malicious packages can definitely crack a smile. As we noted in the findings: “The reality is that it is less work to distribute a handful of malicious packages than it is to perform a security review of each published package. Without a priori knowledge of which packages are malicious, the level of effort required to be successful is disproportionately in favor of the malicious actors. And so, we must remain vigilant against all packages if we do not want to fall prey to even the most amateur malware authors.”
Empathetic congruence with reasonable development practices should be the aspiration of software security tooling. I like to think we’re doing our part at Phylum to focus on developer experience and usability, while providing serious security controls for the software supply chain. We are a team of developers helping to protect developers, because the key to securing software supply chains starts with security teams and developers operating and maintaining trusted relationships that balance security with usability.