For a week now, the Internet has been buzzing with a Bloomberg story reporting a massive hack that allegedly impacted some 30 well-known organizations, from Amazon to Apple, as well as U.S. government agencies. It is important to mention up front that all companies named in the report have vehemently denied the reported facts, so at this point we are in a gray area, caught between the denials from those entities and Bloomberg standing by its story. Yet, true or not, the story has the merit of putting some key questions on the table about continuous security in the CI/CD process and the industry as a whole.
“The Big Hack”
So what does the Bloomberg story say? Three years ago, Amazon was performing due diligence in order to acquire Elemental Technologies, LLC, a company specializing in video streaming and compression. As part of this process, Amazon decided to hire a security firm to take a closer look at Elemental's technology. During its investigation, the security firm examined the servers used by Elemental; these servers included a motherboard manufactured in China by a U.S. company, Super Micro Computer, Inc., one of the world's biggest suppliers of server motherboards. When the firm compared the motherboard design documents to the actual hardware from Elemental Technologies, it discovered a tiny difference: “a tiny microchip, not much bigger than a grain of rice, that wasn’t part of the boards’ original design.” Amazon raised the discovery with U.S. authorities, and the ensuing investigation showed that a hacking organization had managed to breach Super Micro Computer's supply chain, modifying the manufacturing process to add a microchip that could later be remotely triggered, allowing attackers to compromise thousands of machines with little visibility.
“...a tiny microchip, not much bigger than a grain of rice, that wasn’t part of the boards’ original design.” -- Bloomberg Businessweek, The Big Hack: How China Used a Tiny Chip to Infiltrate U.S. Companies
Again, nothing has been proven at this point, and all stakeholders have energetically denied Bloomberg’s reporting. But the story is interesting because, even if this specific incident didn’t happen, the reported scenario is not science fiction: it is very plausible. So plausible, in fact, that the first reaction of most cybersecurity experts in the field hasn’t been “How is this even possible?!?” but rather “Why would the hackers bother hacking the manufacturing process when the same could be done by hacking the chipset design itself, leaving nothing visible to a ‘physical’ inspection...”
Integrity of the hardware supply chain
To many, this served as a big wake-up call: the integrity of your hardware supply chain is paramount to security. Why bother installing firewalls and sophisticated intrusion detection software if professional hackers can pretty much magically bypass them and log into your systems thanks to a hardware implant the size of a grain of rice?
Most IT professionals feel completely powerless in such a fight. Who has the resources, time and expertise to conduct such in-depth reviews? And this is not just about your motherboard supply chain and design documents; we are talking about every element making its way into your data centers (and beyond! What about your company phones? Laptops?).
The reality is that, increasingly, only players with massive critical mass have the means to conduct such in-depth security reviews. Who are they? Well, for the most part, public cloud operators (IaaS)!
The cloud and security
Today, pretty much only cloud vendors have the critical mass that justifies such investments (and they also understand what’s at stake in not making them). Yet, a lot of organizations I meet with still use that very argument of security as a reason NOT to use the cloud. In light of incidents like these, that is mind-boggling.
Frequently, when I meet with a company that shares this view, I ask how many full-time employees (FTEs) they have dedicated exclusively to cybersecurity research. Answers typically fall in the low single digits. As a follow-up, I ask how many of those are exclusively focused on inspecting chip designs for security flaws. The answer to this question has always been zero, plus a pair of raised eyebrows. I then talk about how Google’s Project Zero was able to identify numerous critical security flaws deep inside the CPU architecture, even going as far as evaluating the behavior of the highly complex “speculative execution” CPU engine.
As we enter this new phase of IT industrialization, only massive infrastructure players have the critical mass to play a serious role in providing the level of scrutiny required to limit these types of risks. The cloud is simply your best option when it comes to security.
What about your SOFTWARE supply chain?
The environment
While discussions around “The Big Hack” focused on the integrity of the hardware supply chain, the same type of questions can be asked about the other side of the same coin: the integrity of the software supply chain. That integrity is just as critical in making sure you are not exposed to embedded backdoors and other threats.
A lot of organizations put quite a bit of effort into making sure that their vendors fulfill their obligations. From operating systems to application performance management (APM) and monitoring tools, any software that is tightly integrated into your software stack can do a lot of damage if its integrity gets compromised. You essentially have to trust your vendor to do the right thing (for this, larger organizations will typically perform due diligence on the vendor’s engineering processes as well as impose strong legal language to avoid any misunderstanding).
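Trust can also be complemented by verification. As a minimal sketch of what that can look like in practice, here is a hypothetical Jenkins declarative pipeline stage that refuses to use a vendor artifact unless it matches a checksum pinned ahead of time; the URL and checksum below are placeholders, and the assumption is that your vendor publishes checksums through a trusted, separate channel:

```groovy
pipeline {
    agent any
    environment {
        // Hypothetical placeholders: substitute your vendor's real download
        // URL and the checksum the vendor publishes through a trusted channel.
        VENDOR_URL    = 'https://vendor.example.com/agent-1.2.3.tar.gz'
        PINNED_SHA256 = 'abc123...' // pinned in source control, not fetched at build time
    }
    stages {
        stage('Verify vendor bits') {
            steps {
                // Download the artifact, then fail the build if its SHA-256
                // does not match the checksum pinned above.
                sh '''
                    curl -fsSLo vendor-agent.tar.gz "$VENDOR_URL"
                    echo "$PINNED_SHA256  vendor-agent.tar.gz" | sha256sum -c -
                '''
            }
        }
    }
}
```

This doesn’t replace due diligence on the vendor itself, but it does ensure that the bits entering your stack are the bits your vendor actually shipped.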
It is a must to make sure your providers deliver the bits you are expecting, but what about your own bits, i.e. the software your organization produces?
YOUR software is also at risk
In places where continuous integration/continuous delivery (CI/CD) has not taken hold yet, the risk of a hack being introduced into your software is very hard to scope, hence very hard to prevent: pretty much anybody who touches the software, source code or binary, has an opportunity to introduce a vulnerability, either willfully or, in the more likely scenario, unwittingly, as a vector. From development to QA, release engineering, production, etc., any step along the way can be leveraged by third parties to infect your software. Let’s be clear: with such a surface exposed, not implementing continuous security as part of CI/CD is a recipe for disaster.
Organizations that have moved to CI/CD are in a very different situation. Applications that rely on a fully automated process, from source code to production, offer much less surface for such attacks. Indeed, in a fully automated scenario, the source code is pulled directly from a repository and brought to production through a series of steps that can’t be tampered with. Any change must happen either in the source code (hence it will be visible and versioned) or in the pipeline definition (which, similarly, is stored as source code). As for the production environment, a lot of organizations now rely on infrastructure as code (or GitOps, which is at the core of how Jenkins X operates), bringing the same advantages: full visibility, traceability and versioning.
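To make this concrete, here is a minimal sketch of such a versioned pipeline definition, written as a Jenkins declarative Jenkinsfile; the build and deploy commands are hypothetical placeholders, and the point is simply that the process itself lives in source control alongside the application:

```groovy
// A change to any of these stages is a change to a versioned file in the
// application's own repository, reviewable like any other commit.
pipeline {
    agent any
    stages {
        stage('Checkout') {
            steps {
                checkout scm // pull the exact revision that triggered this run
            }
        }
        stage('Build & Test') {
            steps {
                sh './gradlew build' // hypothetical build command
            }
        }
        stage('Deploy') {
            steps {
                sh './deploy.sh production' // hypothetical; often handed off to a GitOps flow
            }
        }
    }
}
```

Because the Jenkinsfile travels with the code, auditing a change to the process means reviewing a diff, not re-inspecting every binary.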
The impact is huge. Instead of having to deeply inspect each and every release of your software to make sure it hasn’t been tampered with, the focus can now be brought to the CI/CD process itself: if your process passes the inspection, then any execution of that process will satisfy your requirements. And if you decide to evolve your process, then, and only then, will you need to re-inspect it (more specifically, the section that changed), which could have a very small scope.
Much as with the misconceptions about public clouds, when properly done, continuous delivery *increases* your ability to secure your software delivery, while increasing velocity and reducing the need to manually inspect each release for tampering.
Your CI/CD process IS your infrastructure
Along that journey, how can CloudBees help you?
A lot of companies use Jenkins today. Typically, a number of development teams started using Jenkins as a way to automate the early steps of their lifecycle and avoid the “but it works on my computer” drama. But, step by step, their usage of Jenkins expanded further to the right, eventually capturing the entire application lifecycle. Throughout that gradual transition, a lot of organizations didn’t really recognize that Jenkins had started playing a very different role. From a non-critical tool used “by developers, for developers,” Jenkins emerged as the overarching orchestration engine that produces every bit in the organization: if that tool is down, you can’t release software anymore! Jenkins has become, in most organizations, a business-critical environment that should benefit from the same level of scrutiny as any production system.
But availability is not the only sense in which Jenkins has become critical... You need to make sure your CI/CD system and environment (any tools used as part of your CI/CD process, source and binary repositories, etc.) are secure. If they are not, they can act as an amazing vector for transparently infecting all of the bits you produce through that instance. Let’s be very clear: Jenkins (or any CI/CD system) is an absolutely critical piece of infrastructure, both high availability-wise and security-wise.
Yet, I still meet with lots of organizations that don’t implement any proper practices on that front. First, it is not always clear where they are getting their binaries from. Then, they install Jenkins and expand their usage without carefully planning for scale and adoption, which tends to lead to unstable “Jenkinstein” environments they rarely try to sanitize. And ultimately, for fear of “breaking something that works,” they don’t upgrade their Jenkins instances. This is the perfect recipe for security drama.
CloudBees gives you access to certified bits that we thoroughly test and vet. We also provide you with support straight from the experts. Our customers don’t just use our support when things go wrong: we are actually part of their DevOps journey, helping them sanitize their existing Jenkins environments and working with them to plan their future architecture as they expand. Last but not least, thanks to the BeeKeeper Assistant in our CloudBees Assurance Program and our support organization, we make sure you can continuously and safely upgrade your Jenkins instance to the latest bits: CloudBees invests a lot of resources in proactive and reactive security fixes. If you are not updating Jenkins, your software factory is exposed to known vulnerabilities. This should be unacceptable to any organization.
If you want to go beyond this, you can continue your DevOps journey with us and leverage CloudBees Core, which will not only help you set up a strong, secure, centralized CI/CD environment at scale for your organization (while still empowering your teams locally), but also increasingly makes it possible to establish best practices and governance to ensure the software your teams are releasing satisfies what’s important to your organization. Our goal at CloudBees is to enable your teams to innovate and be productive, while satisfying your organization’s desire to not end up in a Bloomberg feature article for the wrong reasons.
Onward.
Sacha Labourey
CEO and co-founder
CloudBees
A native of Switzerland, Sacha graduated from EPFL in 1999. At EPFL, he started Cogito Informatique, an IT consulting business. In 2001, he joined Marc Fleury’s JBoss project as a core contributor, implementing JBoss’ original clustering features. He went on to become GM for JBoss Europe, leading the strategy and helping to recruit partners that fueled JBoss’ growth. In 2005, he became CTO and oversaw all of JBoss engineering. In June 2006, Red Hat acquired JBoss. As CTO, Sacha played a crucial role in integrating and productizing the JBoss software with Red Hat offerings. Sacha went on to become co-General Manager of Red Hat’s middleware division. He left Red Hat in 2009 and founded CloudBees in March 2010. Follow Sacha on Twitter.