Add secrets-store-csi-driver project #658
Conversation
/assign @dims
/cc @ritazh
So we finally get to have the granularity discussion. :)
On its face this seems fine and consistent with precedent, but...
Should we force grouping? E.g. should sig- get a quota on stagings? Or something?
Or should we nest "directories" in GCR? E.g. instead of k8s.gcr.io/csi-foobar and k8s.gcr.io/csi-quuxzorb, should we encourage k8s.gcr.io/csi/{foobar,quuxzorb}?
@detiber re capi - would you prefer to have some form of nesting in the output? Would you prefer to have distinct stagings or a smaller, more omnibus set?
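To make the two naming schemes concrete, here is a small shell sketch (illustrative only; `foobar` and `quuxzorb` are the placeholder names from the comment above, not real images):

```shell
# Flat scheme: each image sits at the top level of the registry.
echo "k8s.gcr.io/csi-foobar"
echo "k8s.gcr.io/csi-quuxzorb"

# Nested scheme: related images share a "directory" prefix, which
# container registries support as part of the repository path.
for img in foobar quuxzorb; do
  echo "k8s.gcr.io/csi/${img}"
done
```

The nested form keeps `docker pull` syntax unchanged; only the repository path gains an extra path segment.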
+1 for nesting, seems like the right paradigm, at least from a technical standpoint.
The main argument against nesting is that we didn't already do it for capi. :)
I've added an agenda item to the next Cluster API meeting to discuss the topic of nesting staging repos.
Note that there are 2 questions and they are orthogonal.
1) Should we merge smaller-scoped staging projects (probably no, IMO)
2) Should we "nest" related prod GCR directories?
On #2 specifically, the nesting would be in the FINAL, public, prod
container image name.
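A hedged sketch of how the two questions separate. The staging path below is an assumption (following the `k8s-staging-<project>` pattern seen elsewhere in this thread); the point is that under question 2 only the final, promoted, public name changes:

```shell
# Question 1: staging projects can stay small-scoped and separate.
# (Hypothetical staging path, not taken from this PR.)
staging="gcr.io/k8s-staging-csi-foobar/csi-foobar"

# Question 2: nesting applies only to the FINAL public name that
# promotion writes, e.g. grouping CSI images under one directory.
prod="k8s.gcr.io/csi/foobar"

echo "${staging} -> ${prod}"
```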
Force-pushed from 814a0d5 to 5f0c4ab.
@thockin Updated the PR based on the comments, PTAL!
@thockin re capi, I'd be ok with all the sig-sponsored cluster api images going into a single bucket, with distinct paths. E.g. we currently have k8s-staging-cluster-api/cluster-api-controller and k8s-staging-cluster-api-aws/cluster-api-aws-controller (just to name 2). I'm fine moving everything under a cluster-api bucket (core cluster api + all sig-sponsored providers), if that works for everyone else. Also xref #671, which is requesting a staging bucket for the digitalocean capi provider. cc @timothysc @vincepri @CecileRobertMichon @yastij @randomvariable and I'm sure there are others I'm leaving off.
If we do end up reorganizing some of the final prod image paths, we should not delete the old (existing) promoter manifests, because they will serve as a historical record. (I.e., let's not be forced to do this again.)
I'm +1 to organize all CAPI projects under the CAPI staging project. In terms of timeline, when are we looking to do this change?
If we do organize under a single staging bucket for cluster-api, building on what @ncdc mentioned, I think it should be organized something like this:
This would allow for separation of ownership/access for sig-sponsored sub-projects that might have separate ownership on individual repos. Should we rally around a doc/issue for hashing out a concrete proposal, including a transition plan and maintaining the historical record (as well as maintaining backward compat)?
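The concrete layout did not survive in this copy of the thread. As an illustration only, a consolidated cluster-api staging bucket with per-provider subpaths might look like the following; the paths are assumptions extrapolated from the two existing examples named above, not the actual proposal:

```shell
# Today (from the comment above): one staging bucket per provider.
echo "gcr.io/k8s-staging-cluster-api/cluster-api-controller"
echo "gcr.io/k8s-staging-cluster-api-aws/cluster-api-aws-controller"

# Hypothetical consolidated form: one bucket, per-provider subpaths,
# so access can still be scoped per sub-project directory.
for p in cluster-api-controller aws/cluster-api-aws-controller; do
  echo "gcr.io/k8s-staging-cluster-api/${p}"
done
```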
/hold
Looks like there are lots of discussions here.
Yes. I just opened #696; let's take the discussion over there.
/assign @thockin @dims
/hold cancel
Thanks!
/lgtm
/approve
[APPROVALNOTIFIER] This PR is APPROVED
This pull-request has been approved by: aramase, thockin
The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
This is activated.
Thank you @thockin!
secrets-store-csi-driver was recently moved to kubernetes-sigs.