Add --context flag #197
Conversation
Welcome @maruina!

Hi @maruina. Thanks for your PR. I'm waiting for a kubernetes-sigs member to verify that this patch is reasonable to test. If it is, they should reply with `/ok-to-test`. Once the patch is verified, the new status will be reflected by the `ok-to-test` label.

I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
@maruina Hi. Thanks for the contribution. This is a good start.
I left some comments about the `context` name. Also, I don't see where in the code you are passing the Kubernetes context to the client. I imagine that is what you want, correct?
pkg/envconf/config.go (Outdated)

```diff
@@ -44,6 +44,7 @@ type Config struct {
 	dryRun                  bool
 	failFast                bool
 	disableGracefulTeardown bool
+	context                 string
```
To avoid confusion with Go context, I would suggest naming this k8sContext
We have a `kubeconfig`; what do you think about `kubecontext`? I am asking so we have only one way to prefix Kubernetes things.
pkg/flags/flags.go (Outdated)

```diff
@@ -175,6 +181,11 @@ func (f *EnvFlags) DisableGracefulTeardown() bool {
 	return f.disableGracefulTeardown
 }
 
+// Context returns an optional kubeconfig context to use
+func (f *EnvFlags) Context() string {
```
Same here, I would suggest naming the method KubernetesContext
No :P
What I could do is:
It sounds a little convoluted, so I'm not 100% sure how you would like to do it, or if you have a better suggestion.
@maruina oh yes, you are right. There is already a method that can apply the Kubernetes cluster context at test time. So, my next question to you: do you have to pass your cluster context name as a CLI argument? You could use an environment variable (i.e. KUBE_CONTEXT) and read it from your test, then apply that method. That way you won't even have to wait for a PR or a code change.
@maruina I went and looked at the code again. If you definitely want to move forward with this, introduce a
It sounds complicated, but we are here to assist you in getting this in.
Correct me if I'm wrong, but in the test we usually call
Regarding the flag, thank you very much for your input; I will follow up on your suggestions.
@vladimirvivien I think that I followed your suggestions up to this point.
When I look at the code, we call `client, err := klient.NewWithKubeConfigFile(c.kubeconfig)` to create a client with a specific kubeconfig, so my understanding is that I need to provide a new method or change
Do you want me to check if
So I can call `NewWithContextName(fileName, context string)`, but I probably need something like
but again, it feels a little too convoluted. What am I missing?
@maruina I think you are getting closer. In

```go
func (c *Config) NewClient() (klient.Client, error) {
	if c.client != nil {
		return c.client, nil
	}
	if c.kubeContext != "" {
		client, err := klient.NewWithContextName(c.kubeconfig, c.kubeContext)
		if err != nil {
			return nil, fmt.Errorf("envconfig: client failed: %w", err)
		}
		c.client = client
	} else {
		client, err := klient.NewWithKubeConfigFile(c.kubeconfig)
		if err != nil {
			return nil, fmt.Errorf("envconfig: client failed: %w", err)
		}
		c.client = client
	}
	return c.client, nil
}
```
@vladimirvivien could we not just offload all calls from
Sorry in advance for the really lengthy comment. If you check the args generated for the binaries of e2e-framework, it already has a `kube-context` flag:

```
❯ ./flags.test --help 2>&1 | grep -A2 kube-cont
  -kube-context string
    	Used to provide a custom KubeContext override for the file passed to the --kubeconfig
  -kubeconfig string
```

e2e-framework/klient/conf/config.go, lines 112 to 122 in cd133e2.

I think we should reuse this and modify the workflow a bit in terms of how the client is created:

```go
func New(fileName string) (*rest.Config, error) {
	// if filename is not provided assume in-cluster-config
	if fileName == "" {
		return rest.InClusterConfig()
	}
	if kubeContext := ResolveClusterContext(); kubeContext != "" {
		return NewWithContextName(fileName, kubeContext)
	}
	// create the config object from k8s config path
	return clientcmd.NewNonInteractiveDeferredLoadingClientConfig(
		&clientcmd.ClientConfigLoadingRules{ExplicitPath: fileName},
		&clientcmd.ConfigOverrides{}).ClientConfig()
}
```

This should keep the code change really minimal? cc @vladimirvivien

If we are to add a new one, here are a few suggestions:

```go
func NewWithKubeConfigFileAndContext(kubeConfig string, kubeContext string) (Client, error) {
	if kubeContext == "" {
		return NewWithKubeConfigFile(kubeConfig)
	}
	cfg, err := conf.NewWithContextName(kubeConfig, kubeContext)
	if err != nil {
		return nil, err
	}
	return New(cfg)
}
```

I think we can add a function like so on the

```go
func (c *Config) NewClient() (klient.Client, error) {
	if c.client != nil {
		return c.client, nil
	}
	client, err := klient.NewWithKubeConfigFileAndContext(c.kubeconfig, c.kubeContext)
	if err != nil {
		return nil, fmt.Errorf("envconfig: client failed: %w", err)
	}
	c.client = client
	return c.client, nil
}
```

This will keep the isolation intact to a good extent. Also, a few other things to consider:
Hello @harshanarayana, thank you very much for your help. Otherwise, I can follow your suggestion and go with the
I still pushed the change, so if we are happy with that, we can test it :)
@harshanarayana from which branch did you try the flags? I'm trying with the latest main, but it seems I can't see that
Oh no! I think I made a mistake. I'm terribly sorry about that. I was on your PR branch and had modified the arg name. I totally forgot about that when I ran the help command. That's how the variable showed up there. You still have to add that arg yourself.
@harshanarayana no worries. I pushed the flag again, and I was able to target a specific cluster using
Thanks @maruina for this awesome contribution.
/retest
@maruina: Cannot trigger testing until a trusted user reviews the PR and leaves an `/ok-to-test` message.

In response to this:

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/ok-to-test
```
go: downloading github.com/josharian/intern v1.0.0
E0217 13:49:12.378771 9620 env.go:352] Recovering from panic and running finish actionsenvconfig: client failed: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory
program not built with -cover
ok  	sigs.k8s.io/e2e-framework/examples/crds	41.446s
=== RUN   TestListPods
=== RUN   TestListPods/pod_list
=== RUN   TestListPods/pod_list/pods_from_kube-system
    k8s_test.go:33: envconfig: client failed: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory
--- FAIL: TestListPods (0.00s)
    --- FAIL: TestListPods/pod_list (0.00s)
        --- FAIL: TestListPods/pod_list/pods_from_kube-system (0.00s)
FAIL
coverage: [no statements]
FAIL	sigs.k8s.io/e2e-framework/examples/custom_env_funcs	38.357s
E0217 13:50:14.534158 15810 env.go:352] Recovering from panic and running finish actionsenvconfig: client failed: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory
program not built with -cover
ok  	sigs.k8s.io/e2e-framework/examples/decoder	20.726s
```

Seems like the new changes are breaking the tests.
@vladimirvivien I think I actually broke something. Let me see where the problem is, because I tested everything before adding the two new methods, so I'm not sure how I broke things.
/hold
- Add --kube-context flag
- Require --kubeconfig when passing --kube-context
- Add --kube-context documentation
- Update klient/conf/config.go (Co-authored-by: Harsha Narayana <harsha2k4@gmail.com>)
- Update klient/conf/config.go (Co-authored-by: Harsha Narayana <harsha2k4@gmail.com>)
- Fix indentation
- Use kubeContext instead of context to avoid clashing
@maruina ping me when you think you got it fixed.

@vladimirvivien all the tests are passing locally, can you please retest?

/retest
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: harshanarayana, maruina, vladimirvivien

The full list of commands accepted by this bot can be found here. The pull request process is described here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing
/hold cancel

/lgtm

Thanks @maruina !!

Thanks both of you for your patience and help!
Fix #187
@vladimirvivien I know this is not what you suggested, but I spent some time looking at the cli-runtime and couldn't figure out how to wire it up.
Since this is blocking our adoption and it is a pretty simple change, I'd like to ask if we can still merge it, and then I can take another look at the cli-runtime fix.