
Add scenarios for key discovery and prioritized registries. #66

Open
wants to merge 6 commits into base: main
Conversation

mnm678
Contributor

@mnm678 mnm678 commented Apr 27, 2021

I added some scenarios based on the discussion during the last meeting.

Signed-off-by: Marina Moore <mnm678@gmail.com>
scenarios.md Outdated

### Scenario #12: Using multiple registries

A user using multiple registries will want to ensure artifacts are downloaded from the intended registry. For example, if an artifact is supposed to be downloaded from their private registry, they never want to download a version from a public registry. Additionally, they may always want to look in their private registry first, and so want an enforced ordering of registries.

Do we want to actually care about which registry it comes from? Or do we want to stick to specifying which keys may sign something?

So long as repo1 re-signs all the content it serves with repokey1, and repo2 does the same with repokey2, we can achieve the same thing using just keys.

Contributor Author

This could be implemented with controls over which keys are trusted. But either way, users need to be able to specify which artifacts come from which registry/root of trust.

Contributor

The offline signing requirement decouples the signing from the persistence. The keys wouldn't be associated with a repo or a registry.


> This could be implemented with controls over which keys are trusted. But either way, users need to be able to specify which artifacts come from which registry/root of trust.

Agreed, but "which registry" and "which root of trust" are very different to implement.

Contributor Author

I updated this to talk about roots of trust rather than registries. In some cases, this will be the same thing, but in others the root of trust might be smaller/larger than a single registry, so hopefully the new wording makes this more clear.

scenarios.md Outdated
**Implications of this requirement**

1. Users must be able to prioritize each registry that they use.
1. Users must be able to specify that a particular artifact may only be downloaded from a particular registry or set of registries.

So a strawman configuration might be:

keys:
  key1:
    - namespace: "*"
      priority: 99
  key2:
    - namespace: "wabbit/*"
      priority: 50
  key3:
    - namespace: "wabbit-modified/*"
      priority: 20
  key4:  # my personal key
    - namespace: "*"
      priority: 5

(here lower number is higher priority; like I said, just a strawman)

Contributor Author

Yes, I would also add a 'terminating' field to indicate that if a particular image isn't found where indicated, you shouldn't try lower priority registries.
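To make the semantics concrete, here is a minimal sketch (hypothetical structure and names, not a Notary v2 API) of how the strawman's priority ordering might interact with a 'terminating' flag that stops the search from falling through to lower-priority entries:

```python
# Hypothetical sketch: resolve which trusted keys apply to an artifact,
# honoring the strawman's priority field plus a 'terminating' flag.
from fnmatch import fnmatch

# (key id, namespace glob, priority, terminating) -- lower number = higher priority
KEYS = [
    ("key4", "*", 5, False),                  # my personal key
    ("key3", "wabbit-modified/*", 20, True),  # terminating: don't try lower priorities
    ("key2", "wabbit/*", 50, False),
    ("key1", "*", 99, False),
]

def candidate_keys(artifact):
    """Yield trusted key ids for an artifact, highest priority first,
    stopping after a matching entry marked terminating."""
    for key_id, scope, _prio, terminating in sorted(KEYS, key=lambda k: k[2]):
        if fnmatch(artifact, scope):
            yield key_id
            if terminating:
                return

print(list(candidate_keys("wabbit/net-monitor")))           # ['key4', 'key2', 'key1']
print(list(candidate_keys("wabbit-modified/net-monitor")))  # ['key4', 'key3'] - search terminates
```

With the terminating flag, an artifact matching `wabbit-modified/*` never falls through to the catch-all `key1`, which is the behavior the comment above describes.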

Contributor

We also debated moving this to a policy level configuration.

@mnm678 mnm678 changed the title Add scenarios for key discovery and prioritized regitries. Add scenarios for key discovery and prioritized registries. Apr 30, 2021
scenarios.md Outdated

If a user does not have a specific key, verified using a third party system, they will need to determine the trusted signing key(s) for an artifact using a secure default method.

1. The user determines the default trusted key(s) for a specific artifact using information available on the registry.
Contributor

Can the information be obtained from the signature?
We're trying to make registries agnostic of additional information on the content, and have the content represent itself. This way, as content is moved within and across registries, the payload contains all the necessary information.

scenarios.md Outdated

**Implications of this requirement**

1. Users must be able to obtain per-package trusted keys.
Contributor

Totally agree the keys are associated with the artifact, and not the repo or registry. This provides flexibility for content movement. I'm just not sure what a default key is in this context.
A collection of wabbit-networks content may be signed with a single key.
A collection of Docker trusted content may be signed with a single key. Is that the default?

scenarios.md Outdated

### Scenario #12: Using multiple registries

A user using multiple registries will want to ensure artifacts are downloaded from the intended registry. For example, if an artifact is supposed to be downloaded from their private registry, they never want to download a version from a public registry. Additionally, they may always want to look in their private registry first, and so want an enforced ordering of registries.
Contributor

The offline signing requirement decouples the signing from the persistence. The keys wouldn't be associated with a repo or a registry.

scenarios.md Outdated

If a user does not have a specific key, verified using a third party system, they will need to determine the trusted signing key(s) for an artifact using a secure default method.

1. The user determines the default trusted key(s) for a specific artifact using information available on the registry.
Contributor

How does this differ from Trust on First Use?

Contributor Author

I clarified that the default keys should be determined using a trusted root.

This commit:
* Clarifies that default trusted keys should come from a trusted root
* Updates scenario 12 to discuss roots of trust more generally,
rather than registries.

Signed-off-by: Marina Moore <mnm678@gmail.com>
Contributor

@sudo-bmitch sudo-bmitch left a comment

I think scenario 10 is almost universally accepted, but there's a lot of discussion happening on 11 and 12. Given that, it might make sense to break 10 out into a separate PR to get it approved.

scenarios.md Outdated

### Scenario #11: Using a default trusted key

If a user does not have a specific key for a given artifact, verified using a third party system, they will need to determine the trusted signing key(s) for an artifact using a secure default method.
Contributor

Is this saying a method is needed to communicate the chain of trust between the signing key and the root key? If so, having a way to query the registry for that chain is one option. Another option is to include the intermediate certificates in the signature. Depending on how we implement this, it may or may not be a separate query from retrieving the signature.

Contributor

Here's a quick sketch of my thoughts on one possible way of including a certificate chain blob with the signature artifact. The only downside is updating the certificate chain (e.g. if a key in the chain is expiring) requires all signature artifacts to be replaced (but the signature blob would be untouched so we shouldn't need to resign images). Should this scenario include the ability to update the certificate chain?

[diagram: nv2-cert-chain-blob]

Contributor Author

Having all of the information collected into one query makes sense, especially for moving signatures between registries, etc. For the requirements here I was trying to avoid too many implementation details, but I'll add something about updating the chain from root.

Contributor

@mnm678 it's not clear what "secure default method" means.


How is this different from what we were discussing with the last scenario of configuring a key to trust for all artifacts? I think I'm missing the reasoning behind treating a "default" key as special, and not just a standard key that can be used for any artifact.

Contributor

One key aspect of Notary v2 is the decentralized model, where the registry stores signatures to artifacts, but key storage is deferred to key management systems. If we can include some chain of trust content in the signature, and it can define what root key could be used, greatness.

scenarios.md Outdated

### Scenario #12: Using multiple roots of trust

A user using multiple registries will want to ensure artifacts are verified using the correct root of trust. For example, if an artifact is supposed to be signed by a key delegated to their private registry, they never want to download a version signed by a key from a public registry. Additionally, they may always want to look for artifacts signed by their private registry key first, and so want an enforced ordering of roots of trust.
Contributor

Is this attempting to prevent dependency confusion attacks? Since all image pulls are uniquely named, we don't have the dependency confusion attacks where a user attempts to pull an image from a private registry and accidentally downloads from a public one. For a mirrored registry, the pull is either explicit to use that mirror, or the mirror is treated as the same as the upstream registry and the risk is actually reversed (that the mirror has become stale).

As a user, I'd like to invert the logic on this. Rather than searching for the correct key for a specific repo, I'd like to be able to scope what repos I trust a key to sign. If it's an internal company key, I may trust it for all scopes. For the Docker Inc key, I may trust it for docker.io/library/*. And for a vendor key, I may trust for that vendor's namespace.

If a user pulls from a mirror, and checks the signature on that mirror reference, they either need to add more scopes onto the keys, or they need a way to specify registry mirrors as a shorthand to multiply the scopes. But I think that gets into implementation details that we don't need to sort out in the scenarios doc just yet.

Given this, I don't think there's a need to prioritize roots of trust. If a key is trusted for a given scope, the signature should be considered valid.
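The inverted model described above can be sketched in a few lines (the data layout and key names here are hypothetical, not an actual Notary v2 policy format): each key carries the scopes it is trusted to sign, and a signature is valid if and only if the signing key's scope covers the repo being pulled.

```python
# Sketch of scope-based trust: trust is attached to keys, not registries.
from fnmatch import fnmatch

# Hypothetical policy: key id -> glob scopes that key is trusted to sign.
TRUSTED = {
    "company-key": ["*"],                   # internal key: trusted for all scopes
    "docker-key": ["docker.io/library/*"],  # trusted only for official images
    "vendor-key": ["docker.io/vendor/*"],   # trusted only for that vendor's namespace
}

def signature_ok(repo, signing_key):
    """True if signing_key is trusted for any scope matching this repo."""
    return any(fnmatch(repo, scope) for scope in TRUSTED.get(signing_key, []))
```

Under this model there is nothing to prioritize: any key whose scope matches makes the signature valid, and a key signing outside its scope (e.g. `vendor-key` on `docker.io/library/alpine`) is simply rejected.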

Contributor Author

I updated to use the inverted logic you propose, but I think that this still requires an ordering of the roots of trust that delegate these keys. If you have a root of trust for internal company keys, and a root of trust for public registry keys, you'll want to prioritize the internal root of trust, then ensure that each individual key has a well-defined scope within the scope of the root of trust. Basically, you start by saying 'I trust this root of trust to tell me keys for these registries', then each delegation says 'use this key for this subset of registries/repositories', until you know the exact key to use for verification.
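The chain described here ("I trust this root of trust to tell me keys for these registries", then narrower delegations down to an exact key) might look like the following sketch (all names and the data layout are illustrative, not Notary v2 APIs):

```python
# Hypothetical sketch: walk from a prioritized list of roots of trust,
# through scoped delegations, down to the verification key for an artifact.
from fnmatch import fnmatch

ROOTS = [  # highest priority first: the internal root wins over the public one
    {"name": "internal",
     "delegations": [("corp.example/*", "internal-signing-key")]},
    {"name": "public",
     "delegations": [("docker.io/library/*", "library-key"),
                     ("docker.io/vendor/*", "vendor-key")]},
]

def key_for(artifact):
    """Return the first key reachable via a root whose delegation scope
    covers the artifact; roots are consulted in priority order."""
    for root in ROOTS:
        for scope, key in root["delegations"]:
            if fnmatch(artifact, scope):
                return key
    return None
```

The root ordering only matters when delegation scopes overlap; with non-overlapping scopes, at most one root can ever supply a key for a given artifact.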

Contributor

I think the keys are decoupled from repositories; a publisher may use the same key to push artifacts across different repositories/registries. Also, it's unclear why an ordering of trusted roots is required. What role does the ordering play in signature verification?


To me, configuration ordering is a client-side configuration concept, and the (interoperability-focused) signature specification shouldn’t need to care.

Whether the client configures (with priorities and fallback and termination clauses)

private-registry.example:
    prio: 100
    keys: my-private-key
public-registry.com: # (when mirrored) accept my-private-key for private overrides of publicly-hosted content
    prio: 99
    keys: my-private-key
public-registry.com/vendor:
    prio: 10
    keys: vendor-private-key
    stop-search: true # don't try the default CA, we know the right key for this vendor
public-registry.com:
    prio: 0
    key-ca: public-registry-ca-key # let the registry provide per-repo keys

or (with only using the most-matching clause and ignoring everything else, potentially asking the users to copy&paste the wide-scoped keys into narrowly-scoped stanzas, OTOH being very explicit about the trusted keys):

private-registry.example:
    keys: my-private-key
public-registry.com/vendor:
    keys: vendor-private-key, my-private-key # (when mirrored) accept my-private-key for private overrides of publicly-hosted content
    # key-ca not specified: don't try the default CA, we know the right key for this vendor
public-registry.com:
    keys: my-private-key # (when mirrored) accept my-private-key for private overrides of publicly-hosted content
    key-ca: public-registry-ca-key # let the registry provide per-repo keys

is just a UX design concern, not a feature requirement: both express the same set of trusted keys.


Rereading

> Additionally, they may always want to look for artifacts signed by their private registry key first

Is this just saying “some configuration should be processed first” as in other examples in this PR, or actually codifying the (WIP/proposed?) Notary V1 / TUF feature of signing absence, so that given prioritized (key-1, key-2), key-2 signatures for $repo are only accepted if we have a key-1 signed assertion that key-1 is not signing any content for $repo ?

scenarios.md Outdated

**Implications of this requirement**

1. Users must be allowed to configure specific trusted keys for specific artifacts.
Contributor

Does this imply that the user provides a mapping of trusted keys per artifact? I think the user should just provide the trusted key; if what the user pulls based on a tag/digest (in a pod spec or any other execution service configuration) is signed by that key, the artifact is accepted.

@mtrmac mtrmac Jun 4, 2021

If I trust VendorAKey for VendorAProduct, that does not at all imply that I accept VendorAKey’s signatures for VendorBProduct or a PrivateInternalApplication (AKA the “if you enable Flash, Adobe can sign kernel updates” problem with Linux RPM signing).

Access starts with user intent to access some specific application, and only depending on which application is the user deciding where to pull from, and what keys to trust. Sure, some keys may be trusted for everything for specific users and specific deployments, but that’s a special case of the general principle, not the general case.

Contributor

@mtrmac I agree with that in principle; doesn't the tag or digest, along with the repository URL, specify the product?

Contributor

Are we saying a separate key MUST be used for each artifact? Or, it's up to the publisher to decide the granularity?
Microsoft may use a different key for dotnet, vs. Dynamics CRM. Maybe they use the same key for all their office suite products, but isn't that really a publisher decision?

@michaelb990 michaelb990 Jun 4, 2021

+1.

I'd imagine a way to specify a different set of trusted keys for each container / task / etc. -- basically, at container launch time. An example I am picturing is someone running a postgres image that needs to trust a 3rd party key from the maintainers of postgres AND a backend API service that connects to that database which must be signed by their own key.


scenarios.md Outdated
If a user does not have a specific key for a given artifact, verified using a third party system, they will need to determine the trusted signing key(s) for an artifact using a secure default method.

1. The user determines the default trusted key(s) for a specific artifact using information available on the registry, using delegations from a trusted root key.
1. The user downloads and verifies an artifact using Notary v2 and the default trusted key(s)
Contributor

It's unclear what default trusted keys mean and, by extension, what non-default keys are; it seems to imply some implicit trust.

scenarios.md Outdated

**Implications of this requirement**

1. Users must be able to obtain per-package trusted keys, verified by a trusted root.
Contributor

Suggestion - s/package/artifact


Signed-off-by: Marina Moore <mnm678@gmail.com>
Signed-off-by: Marina Moore <mnm678@gmail.com>
@sudo-bmitch
Contributor

Updates to scenario 11 LGTM. It does a good job focusing on the goal without the implementation (e.g. we don't say the chain must be stored in the registry).

Signed-off-by: Marina Moore <mnm678@gmail.com>
Signed-off-by: Marina Moore <mnm678@gmail.com>
@mnm678
Contributor Author

mnm678 commented Aug 19, 2021

As discussed, I moved the first scenario to #96, and added a discussion of scoping to the scenario in this PR.

The example may benefit from a diagram of possible uses of multiple roots, but I'm not sure whether that fits better in this document or elsewhere. I'll work on creating something either way.

cc @sudo-bmitch @gokarnm @mtrmac

@@ -250,6 +250,21 @@ A weakness is discovered in a widely used cryptographic algorithm and a decision
1. Key revocation, chain of trust, etc. must all work for the expected lifetime of a version of the client software while these changes are made.
1. The actions that different parties need to perform must be clearly articulated, along with the result of not performing those actions.

### Scenario #12: Using multiple roots of trust

A user using multiple registries will want to ensure artifacts are verified using the correct root of trust. For example, if an artifact is supposed to be signed by a key delegated to by the root of trust for their private registry, they never want to download a version signed by a key delegated from a public root of trust for a public registry. Additionally, if there are multiple roots of trust that are trusted for a particular package, they want these prioritized to ensure a consistent resolution.
Contributor

> For example, if an artifact is supposed to be signed by a key delegated to by the root of trust for their private registry, they never want to download a version signed by a key delegated from a public root of trust for a public registry

This wording implies the roots of trust are correlated with a registry.
To facilitate content-copying within and across registries, there are three elements to Notary v2

  1. Registries are storage buckets, that understand references
  2. Entities sign artifacts, irrespective of what registry they may reside in. Entities maintain their private and public keys outside the registry.
  3. Artifacts are signed, and the signatures travel with them when copied.

I should be able to validate the net-monitor:v1 image independently from the registry by which I get it.

Scenarios:
Wabbit-networks publishes their net-monitor image to:

  • registry.wabbit-networks.io/products/net-monitor:v1
  • docker.io/wabbitnetworks/net-monitor:v1

ACME Rockets may import the net-monitor image from either location.

The image is available at:

  • registry.acme-rockets.io/products/networking/net-monitor:v1 - the central location for teams to pull acme rockets certified content
  • registry.acme-rockets.io/production/net-monitor:v1 - an air-gapped environment used for production.

Where the content happens to exist, at that point in time, what correlation does that have with who signed the net-monitor image?

Priority Ordering:
We need to support inclusion and exclusion capabilities, but I thought we agreed priority ordering is problematic as the ordering has all sorts of squatting exploits.

Wasn't this PR going to focus on discovery? These all look like validation scenarios.

Contributor Author

I associated the roots with the registries as an example, that would correspond to something like the Docker root of trust, etc. I can remove that as it seems to be adding confusion.

I would actually argue that the ordering attacks mostly come from not explicitly stating the order in which roots should be used. @sudo-bmitch proposed not allowing overlapping scopes, which would be one way to prevent these priority attacks. I tried to find a middle ground in this PR that encourages users to define non-overlapping scopes, but allows for setting priorities if/when scopes of various roots do overlap. (I'm not sure I explained the priority attacks very well in the call. It's explained pretty well in section 5.2 of this paper: https://theupdateframework.io/papers/protect-community-repositories-nsdi2016.pdf?raw=true)

Contributor

Examples are great, and I think it helped identify the discussion. Conceptually, the content is goodness. Can we just tweak it to address them individually?

  • keys aren't associated with registries. Rather entities. If you can say the wabbit-networks root key has no direct correlation to content being validated from registry.wabbit-networks.io, vs. registry.acme-rockets.io/*
  • Possibly separate the validation scenarios to a policy pr
  • the priority ordering also aligns with the policy content

@sudo-bmitch
Contributor

sudo-bmitch commented Aug 19, 2021

Revisiting the priority ordering discussion, it might help for me to write out my scenario for why I don't see the value of the priorities and perhaps @mnm678 can describe one where it is needed.

Let's assume 2 clusters, dev and prod. In dev, I require one key from the following scoped roots:

  • Organization root that is scoped to the world. Anything they sign, we trust.
  • Dev root, also scoped to the world.
  • Docker Library root, scoped to a mirror of Docker official images.
  • Wordpress root, scoped to a local mirror of the wordpress repo.

And prod would not include that dev root, but is otherwise the same:

  • Organization root that is scoped to the world. Anything they sign, we trust.
  • Docker Library root, scoped to a mirror of Docker official images.
  • Wordpress root, scoped to a local mirror of the wordpress repo.

For developers building stuff in CI, they get a key under the dev root to sign all their local work and run anything they want. And they can run anything the parent organization has signed. If they try to run an unmodified wordpress image from their local mirror, they really don't care which of the three roots has signed the image (organization, dev, or wordpress), each are equally valid for that image in my mind. That gives them flexibility to extend the wordpress image to fix some bug and push it to the same repo with a different tag and still deploy it using their dev key. However if the wordpress key is ever the only key signing the mirror of library/alpine, the policy would reject that since the wordpress key isn't scoped for that repo.

Similarly in production, if they verify an image that has a dev signature attached, that signature is ignored and they search for one from the organization (or one of the other trusted keys in their defined scope). Seeing the dev key, or lack thereof, doesn't impact the approval; they just continue searching until a key that does satisfy the policy is found. And similar to dev, if they deploy the alpine image, it doesn't matter if they find the signature for the Organization or the Docker Library: both would be accepted by the policy, so I can't come up with a reason where priority would matter.
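The dev/prod policies above can be written down as data plus one check (the layout, root names, and repo paths are hypothetical, chosen only to mirror this example): an image is admitted if at least one of its signatures comes from a root trusted for that repo in the target cluster.

```python
# Sketch of the two-cluster policy described above (hypothetical format).
from fnmatch import fnmatch

ROOTS = {  # root name -> repo scopes that root is trusted for
    "org": ["*"],                              # trusted for the world
    "dev": ["*"],                              # trusted for the world (dev cluster only)
    "docker-library": ["mirror/library/*"],    # mirror of Docker official images
    "wordpress": ["mirror/library/wordpress"], # local mirror of the wordpress repo
}
CLUSTER_ROOTS = {
    "dev":  ["org", "dev", "docker-library", "wordpress"],
    "prod": ["org", "docker-library", "wordpress"],  # no dev root in prod
}

def admitted(cluster, repo, signatures):
    """signatures: set of root names that signed the image's digest.
    Admit if any trusted root signed AND its scope covers the repo."""
    return any(
        root in signatures and any(fnmatch(repo, s) for s in ROOTS[root])
        for root in CLUSTER_ROOTS[cluster]
    )
```

Note the check is unordered: a dev signature seen in prod is simply ignored rather than causing a rejection, and the wordpress root signing outside its scope (e.g. `mirror/library/alpine`) never satisfies the policy.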

Note that Notary v2 is only verifying that a given image with a digest, and possibly the tag, is signed by a trusted entity. When Notary is called, the admission controller has already been told which image, and images are referred to with an explicit registry name, so we've been fortunate to not be subject to the dependency confusion attacks (with the downside that people hitting Docker Hub rate limits have to modify their deployment to point to a different registry hostname to use a mirror).

My question is only on the priority part, so if I'm missing something, please let me know. Other than that, I'd like to see the rest of this approved since scoping of keys is important to me, and I believe to many others too.

@mnm678
Contributor Author

mnm678 commented Aug 20, 2021

@sudo-bmitch Thanks for the example, I think this makes it more clear what we're talking about.

In this scenario, is it possible for different trusted parties, say dev and wordpress to sign different artifacts for the same tag? This is the case where priority is important, so that the user can have a deterministic resolution of the verification. However, I think you're saying that the tag resolution is out of scope? If so, I guess this problem would pass to the admission controller, which would then be in charge of resolving the priority.

@sudo-bmitch
Contributor

@mnm678 at any one time, a tag will only point to a single digest from the registry. (It gets more complicated than that with multi-platform images, but I don't think that's a factor for this.) The tag is typically mutable, so another manifest could be pushed to replace the tag, e.g. the dev team extends the upstream image and pushes their own version of the wordpress:latest to their mirror, but that replaces the tag rather than giving multiple things the tag could return. From the perspective of the verifier, they'll only see a single digest when they check the tag, possibly signed multiple times. They'd then query the registry to see if that digest (and possibly the associated tag) is valid according to the policy.

I need to think a lot more on whether there's a clean way to inject notary between the tag to digest resolution process, and whether we want it to. Docker Content Trust did this with nv1, but that worked because it was tightly integrated with the client doing the pulling, and there was a secondary database of tag to digests maintained by the notary server. We're actually seeing an issue from that now because signing of official images hasn't happened since late 2020. So people with nv1 enabled are getting stale images from last year instead of more current/patched images, and there's no UX to tell them that's happening.

At least right now with the current design, we've ditched the external notary server, and left the tag to digest resolution happening with the registry, so we'll only ever get one value from that, which means there's nothing to prioritize.

@mnm678
Contributor Author

mnm678 commented Aug 20, 2021

In that case, I think we'll need to update some of the requirements and scenarios to make it clear that Notary will no longer support tag signing. However, I worry that removing that will make it impossible to ensure many of the security guarantees that Notary is aiming for, including signature revocation and protection from rollback/freeze attacks.

But if Notary is just checking the signatures on a hash, then I agree that the priority attacks wouldn't apply.

@sudo-bmitch (Contributor) commented:

At least with the next release, I don't think there will be any tag signing guarantees (which makes me push back if we try to call it GA). When we do get tag signing, I suspect it will be a much weaker guarantee than we have with nv1. When we do get tag signing, I believe it will be on the other side of the request, verifying that the tag is valid for a digest, rather than asking a notary server what digest should be pulled for a tag.

A potential implementation could have the signature include the descriptor that has the digest, but also add an annotation with an array of tags that the signer claims may be valid for that digest. It means multiple digests could all be valid for a common tag, unless those old signatures are revoked as new images are built. That may be a desirable quality for someone deploying with a pinned digest, or redeploying an image they pulled some time in the past and a scaling event was triggered.
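That potential implementation can be sketched as follows. The payload shape and the annotation key (`io.example.signed-tags`) are hypothetical illustrations, not the Notary Project's actual format: the signature covers a descriptor carrying the digest, plus an annotation listing the tags the signer claims are valid for it:

```python
def make_signature_payload(digest: str, tags: list) -> dict:
    # Hypothetical signed payload: the descriptor pins the digest, and an
    # annotation records which tags the signer claims may map to it.
    return {
        "targetArtifact": {
            "digest": digest,
            "annotations": {"io.example.signed-tags": ",".join(tags)},
        }
    }

def tag_claimed_valid(payload: dict, tag: str) -> bool:
    # Verifier side: check the resolved tag against the signer's claim.
    # Note this does nothing to prevent multiple digests from each
    # carrying a signature claiming the same tag.
    claimed = payload["targetArtifact"]["annotations"].get(
        "io.example.signed-tags", ""
    )
    return tag in claimed.split(",")
```

Because each signature independently claims its tags, two different digests can both hold unrevoked signatures naming `latest`, which is exactly the weaker-but-more-available guarantee described above.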

There are other implementations that could improve the integrity (only one tag to digest mapping is valid), at the cost of availability (e.g. an ill timed scaling event finds the previously downloaded version of the image is no longer valid and a download is triggered, delaying the start of the container to handle the flood of requests). They each have tradeoffs, and I think it would be a good question to put to the community.

Independent of how tag signing is implemented, I think a freshness guarantee for tag signing makes sense with registries, while priorities just don't fit our model, because we're not prioritizing pulling images from different registries.

@mtrmac commented Aug 25, 2021

> And similar to dev, if they deploy the alpine image, it doesn't matter if they find the signature for the Organization or the Docker Library, both would be accepted by the policy, so I can't come up with a reason where priority would matter.

Consider an important vendor hosting their official images at docker.io/vendor; the consumer has a generic policy to require docker.io-hosted images to be signed with a Docker “correctly uploaded by a valid docker.io user” key, but the consumer also has a direct relationship with that vendor, and knows the right public key for that vendor. The consumer wants to accept only the known vendor’s key, not either the vendor’s key or the generic docker.io key.

This is possible to express as “vendor’s key scoped to docker.io/vendor + Docker key scoped to docker.io” if scopes are exclusive, and the docker.io scope doesn’t apply to docker.io/vendor; if all matching scopes are merged and treated as equivalently trusted, we would need priorities — but I’d prefer to have exclusive scopes, because that allows much simpler analysis of the configuration.
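A minimal sketch of the exclusive-scope matching described here, assuming a hypothetical policy shape mapping a scope (registry or repository prefix) to trusted key names: only the most specific matching scope applies, so a key scoped to `docker.io` never applies to `docker.io/vendor`:

```python
def match_scope(policy: dict, image_ref: str) -> list:
    """Return trusted keys for image_ref under exclusive scopes.

    policy maps scope prefixes to lists of trusted key names
    (hypothetical shape for illustration). Only the longest matching
    scope wins; broader scopes are excluded, not merged, so no priority
    ordering between matches is ever needed.
    """
    best = None
    for scope in policy:
        if image_ref == scope or image_ref.startswith(scope.rstrip("/") + "/"):
            if best is None or len(scope) > len(best):
                best = scope
    return policy[best] if best else []
```

With `{"docker.io": ["docker-generic-key"], "docker.io/vendor": ["vendor-key"]}`, an image under `docker.io/vendor` is accepted only with `vendor-key`, while other docker.io images fall back to the generic key, with no merged or prioritized key sets to reason about.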

@mtrmac commented Aug 25, 2021

> When we do get tag signing, I suspect it will be a much weaker guarantee than we have with nv1. When we do get tag signing, I believe it will be on the other side of the request, verifying that the tag is valid for a digest, rather than asking a notary server what digest should be pulled for a tag.
>
> A potential implementation could have the signature include the descriptor that has the digest, but also add an annotation with an array of tags that the signer claims may be valid for that digest. It means multiple digests could all be valid for a common tag, unless those old signatures are revoked as new images are built. That may be a desirable quality for someone deploying with a pinned digest, or redeploying an image they pulled some time in the past and a scaling event was triggered.

Yes, I think this is the right trade-off. The TUF freshness/rollback-protection guarantees are just too costly to operate (signing is not a one-time deployment action but a service that needs to continuously run and re-sign freshness guarantees to avoid downtime), and only really relevant for users deploying :latest or similar moving tags, which is generally problematic in enterprise test/production deployments for many reasons anyway.

If version tags are not moved, and the image signer institutes a policy of never signing two different digests with the same version tag, we don’t need the freshness/rollback-protection guarantees and we can have a much simpler design.
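The signer-side policy proposed here can be sketched with a hypothetical signing log: refuse to sign if the version tag was ever signed for a different digest, so no two digests are ever simultaneously valid for one tag and no freshness metadata is needed:

```python
def sign_tag(signing_log: dict, tag: str, digest: str) -> bool:
    """Signer-side policy sketch: one digest per version tag, forever.

    signing_log maps tag -> digest for everything this signer has
    previously signed (a hypothetical persistent record). Re-signing the
    same digest is allowed; signing a different digest under an
    already-used tag is refused.
    """
    prior = signing_log.get(tag)
    if prior is not None and prior != digest:
        return False  # would make two digests valid for one tag
    signing_log[tag] = digest
    return True
```

Under this discipline the verifier never needs rollback protection for version tags: any signed (tag, digest) pair it finds is the only pair that was ever signed.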

@yizha1 (Contributor) commented Jun 22, 2023

@mnm678 Would you mind closing this PR, since there has been no activity for more than a year? You can create a new issue to describe the problem if needed. Thanks.
