Running secured images in Kubernetes environments with Harbor and Notary

Alex Tesch
7 min read · Jun 29, 2020

“Security is a perception… what is considered safe for one entity may not be considered safe for others.”

I still remember those opening words from our Security and Cryptography teacher when I was doing my undergraduate studies in computer science about 20 years ago.

That definition still stands today, even with the ever changing IT landscape.

When it comes to container security, one of the first things that comes to mind for most Kubernetes admins and developers is Clair — the Open Source image scanning project by CoreOS.

Clair is currently used by popular container registries such as Quay and Harbor to scan the images stored in the repository and detect package vulnerabilities reported in the up-to-date Common Vulnerabilities and Exposures (CVE) database.

Clair is an excellent tool in our security arsenal; however, it doesn’t address all the risks associated with the container images that will run in our Kubernetes cluster.

Clair can be considered our first line of defense, as most attackers will try to penetrate systems using known vulnerabilities that haven’t been patched.

It is not uncommon, even for government institutions, to run a few security patches behind and fall prey to script kiddies targeting documented exposures.

We also need to keep our guard up against more sophisticated attacks, those that require a bit more creativity, which are a lot harder to detect when they take place…

The CIA triad of information security, what our old cryptography teacher used to refer to as “The Security Trinity”, has three core pillars: Confidentiality, Integrity and Availability.

It is integrity where I want to place the spotlight of this article.

How do we know that those containers that we are running in our Kubernetes environment are really based on images created by our trusted developers?

Can we tell if our container images have been tampered with?

Are we actually running some malicious code in our platform that we are not aware of?

This can be easily addressed with a proper container signing protocol.

By allowing our developers to sign the images that they push into our private registry, we can rest assured that the containers that we will run in production are coming from a trusted party.

Having the right image signing framework in place is important, as there are different ways to achieve this objective.

Why not GPG?

GPG keys are widely known in the Red Hat world as the tool used to sign the RPM packages that will be distributed to and installed on our RHEL servers.

The GPG approach, which has been used quite extensively for package distribution, has also been promoted by Red Hat as “simple signing” for container images.
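At its core, simple signing is little more than a detached GPG signature over a JSON claim about an image manifest digest. The sketch below illustrates the idea with plain GPG; the key identity, file names and digest are all hypothetical, and this is not Red Hat’s actual tooling:

```shell
# Sketch of "simple signing": a detached GPG signature over a JSON claim
# about an image manifest digest. Key, e-mail and digest are hypothetical.
export GNUPGHOME="$(mktemp -d)"   # throwaway keyring for this demo

# Create a throwaway signing key (a real developer already has one).
gpg --batch --passphrase '' --quick-generate-key dev@example.com default sign never

# The claim binds an image reference to its manifest digest.
cat > claim.json <<'EOF'
{"critical":{"identity":{"docker-reference":"registry.example.com/myapp:1.0"},
 "image":{"docker-manifest-digest":"sha256:0000000000000000000000000000000000000000000000000000000000000000"}}}
EOF

# Sign the claim, producing claim.json.asc, then verify it.
gpg --batch --pinentry-mode loopback --passphrase '' \
    --local-user dev@example.com --detach-sign --armor claim.json
gpg --verify claim.json.asc claim.json
```

Whoever holds the public key can verify the claim; the weakness discussed next is that one long-lived key signs everything.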

Although this is a simple implementation that can certainly get the job done, it is quite a flawed approach…

Let’s say we have a developer who has signed and pushed more than a thousand images to a container registry, and who then suddenly realizes that the system holding his private key has been compromised, potentially granting an unknown third party the capability to sign container images as if they were created by him…

This has a solution, but not a pleasant one… we will need to inform the end users that the key has been compromised (that key will need to be blacklisted), then create a new GPG key pair, re-sign all thousand images, and make the new public key available to the end users.

There is also the issue of “freshness”. All the image versions that have been pushed to the registry will have been signed by the same GPG key, so there is no distinction, in terms of signing key, between two different releases.

Yes, the registry will know which is the most up-to-date image, but not because of the key used to sign it…

Introducing The Update Framework

The Update Framework (TUF) is the first graduated project in the CNCF security landscape.

TUF was designed on the premise that every software repository will be compromised at some stage; therefore it incorporates separation-of-duties techniques for signing that make it possible to minimize the impact of a stolen private key in the environment.

This goal is achieved by implementing the hierarchy of keys shown in the diagram below.

The Root key is the master key that rules them all; it must be kept offline and secured. This key must be protected at all costs, since it is at the core of the framework.

The Root key can create and replace (in the event of compromise) the Timestamp, Snapshot and Target keys.

The Timestamp key gives us “freshness”: the capability to determine the most recent version of a particular package (or image).

The Snapshot key provides a consistent view of the metadata available on a repository.

The Target key is the one used to sign the packages (or images) in our repository. A Target key can also issue “delegation keys” to other developers, in order to have accountability in the framework.

Timestamp and Snapshot keys are usually rotated on a daily basis, so they have short expiration times in the framework.

Root and Target keys expire less often and are usually valid for a year.
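To make the hierarchy concrete, here is a minimal sketch of generating a separate key pair per TUF role, using plain OpenSSL rather than Notary’s own key management; all file names are illustrative:

```shell
# Sketch: one key pair per TUF role, generated with OpenSSL.
# Notary manages its own keys; this only illustrates the separation of duties.
for role in root targets snapshot timestamp; do
    openssl genpkey -algorithm ed25519 -out "${role}.key"
    openssl pkey -in "${role}.key" -pubout -out "${role}.pub"
done

chmod 600 root.key   # the root key stays offline (e.g. on an air-gapped machine)
# snapshot.key and timestamp.key live with the service and are rotated often;
# targets.key signs the actual images and can issue delegations.
```

The point of the split is blast radius: losing the short-lived timestamp key is an inconvenience, while the offline root key can replace any compromised role key without re-signing every image.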

The Update Framework is definitely a better approach for our image signing needs, but how do we implement it?

Enter Notary

Notary is an incubating open source project in the CNCF security landscape. It was built on top of The Update Framework by Docker Inc.

The Notary service architecture involves a Notary Server, responsible for storing and updating the signed TUF metadata files, and a Notary Signer, which stores private keys and signs the metadata for the Notary Server.

The first container registry ever to leverage Notary for signing container images was Docker Trusted Registry (DTR), developed by Docker Inc. and distributed as part of Docker Enterprise. DTR includes an opinionated implementation of Notary and is best deployed in HA using 3 virtual machines.

DTR is great for those organizations that want to keep the registry as a separate entity outside the Kubernetes cluster. However, if our goal is to deploy a container registry with Notary capabilities in our Kubernetes environment, Harbor is my pick of the bunch.

Harbor, an open source registry developed and led by VMware, just became the first container registry project to graduate under the CNCF landscape.

Harbor is extremely easy to deploy in a Kubernetes cluster if we leverage Helm; we can use the five steps below to get a Harbor registry up and running with Notary integrated:

  • $ helm repo add harbor https://helm.goharbor.io
  • $ helm fetch harbor/harbor --untar
  • Edit the values.yaml file to specify your ingress for notary and harbor (core).
  • $ kubectl create ns harbor
  • $ helm install -n harbor harbor .
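For step 3, the relevant portion of values.yaml looks roughly like the excerpt below; the hostnames are placeholders for your own ingress:

```yaml
# Illustrative values.yaml excerpt for the Harbor chart; hostnames are placeholders.
expose:
  type: ingress
  ingress:
    hosts:
      core: harbor.example.com
      notary: notary.example.com
externalURL: https://harbor.example.com
```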

We can monitor the pod creation in our harbor namespace; all pods should be running within a few minutes.

Once Harbor, the Notary server and the Notary signer are running, we can proceed to register our first container image in the repository.

We must set the DOCKER_CONTENT_TRUST_SERVER environment variable to point to our Notary ingress.

$ export DOCKER_CONTENT_TRUST_SERVER=https://<notary-ingress>

We must also set the DOCKER_CONTENT_TRUST environment variable to 1, to let Notary do its magic:

$ export DOCKER_CONTENT_TRUST=1

The next time we perform a docker push from that console, we will be prompted to provide a passphrase for our private key creation (the key that will sign our images).
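Putting the pieces together, a signed push looks roughly like the sketch below; the registry and Notary hostnames, project and tag are all hypothetical:

```shell
# Hypothetical hostnames; replace with your own Harbor and Notary ingress.
export DOCKER_CONTENT_TRUST_SERVER=https://notary.example.com
export DOCKER_CONTENT_TRUST=1

# With content trust enabled, the push below would be signed on the fly,
# prompting for the root and repository key passphrases on the first run:
#   docker tag myapp:1.0 harbor.example.com/library/myapp:1.0
#   docker push harbor.example.com/library/myapp:1.0
#
# The resulting trust data can then be inspected with:
#   docker trust inspect --pretty harbor.example.com/library/myapp:1.0
```

With the variables unset (or set to 0), pushes and pulls behave as usual and no signatures are created or verified.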

If all goes smoothly, we should be able to see in our Harbor GUI that the container image was successfully signed.

Now that the image has been signed in our repository, we can rest assured that our end users will know that the application is delivered from a trusted party.

In the video below, Nicky and I demonstrate the advantages of Notary in a production environment that has been compromised.

https://www.youtube.com/watch?v=rwqCcS2zPCE

It is important to keep in mind that Notary was designed to make good signing practices easy for container images; however, we must remember that the framework validates the publisher, not the safety of their content.

It is wise to include source code scanning tools in our pipelines to make sure that our application code is safe and hard to exploit.

The last words of my cryptography teacher at the end of the semester were the following: “As our final session comes to an end, I leave you with our definition of security from our first class: security is a perception. I truly hope that your perception has been enhanced over the past few months, and that you are better prepared to protect your systems and your end users.”

Thank you for reading!

Alex


Alex Tesch

Chief Solution Architect at HPE who enjoys automating all things possible. Always on the lookout for new building blocks for the ultimate CI/CD pipeline.