Add network policy enforcement latency measurement #2234
Conversation
/assign @marseel
/cc
/lgtm

As it's a non-trivial measurement, I would like someone else to take a look too.
/hold
Nitpicks, apart from that LGTM.
```go
limiter := rate.NewLimiter(rate.Limit(policyLoadQPS), policyLoadQPS)

for nsIdx, ns := range nps.targetNamespaces {
	baseCidr := fmt.Sprintf("10.0.%d.0/24", nsIdx)
```
The measurement is not suitable for more than 256 test namespaces. Could we at least have some way of indicating this is not allowed, by explicitly failing in `Execute` when the user attempts to do so?
Added `octet := nsIdx % 256` to make it work for more than 256 namespaces. The CIDR is not very important here; it's just testing the network policy load.
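A minimal sketch of that wrap, assuming the helper name `cidrForNamespace` (hypothetical; the real change lives inside the loop quoted above):

```go
import "fmt"

// cidrForNamespace returns a syntactically valid per-namespace CIDR.
// Indices beyond 255 wrap around and reuse third-octet values, which is
// acceptable here because the CIDR only has to be valid for policy-load
// testing, not unique per namespace.
func cidrForNamespace(nsIdx int) string {
	octet := nsIdx % 256
	return fmt.Sprintf("10.0.%d.0/24", octet)
}
```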
```yaml
      matchLabels:
        kubernetes.io/metadata.name: {{.TargetNamespace}}
{{else}}
      namespaceSelector: {}
```
Nit: is this actually needed?
Yes, because we are allowing ingress traffic from all test namespaces, not just from the one that the pods are in and the policies are applied to.

More info: https://github.com/ahmetb/kubernetes-network-policy-recipes/blob/master/05-allow-traffic-from-all-namespaces.md

> Selects all pods in all namespaces (`namespaceSelector: {}`). By default, if you omit specifying a `namespaceSelector` it does not select any namespaces, which means it will allow traffic only from the namespace the NetworkPolicy is deployed to.
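For illustration, a minimal standalone policy sketch (the name is hypothetical) showing the empty-selector behavior described above:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-all-namespaces  # hypothetical name
spec:
  podSelector: {}  # applies to all pods in the policy's namespace
  policyTypes:
    - Ingress
  ingress:
    - from:
        # The empty selector matches every namespace; omitting
        # namespaceSelector entirely would restrict this rule to the
        # policy's own namespace.
        - namespaceSelector: {}
```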
verbs: ["get", "list", "watch"] | ||
- apiGroups: ["networking.k8s.io"] | ||
resources: ["networkpolicies"] | ||
verbs: ["get", "list", "watch"] |
Nitpick: "watch"
for networkpolicies isn't required, is it?
Apparently neither "list"
is being used. I'm removing both "list"
and "watch"
for network policies.
They are being used by the test client app https://github.com/kubernetes/perf-tests/tree/master/network/tools/network-policy-enforcement-latency
```go
}

func (nps *networkPolicyEnforcementMeasurement) run(config *measurement.Config) error {
	// Should this be mandatory? Fail if not provided.
```
Should this get a TODO or is it an unfinished leftover?
Removed.
```go
	return nil
}

func (nps *networkPolicyEnforcementMeasurement) run(config *measurement.Config) error {
```
Could we add some check to `run` that `setup` was run beforehand?
Done. Used `nps.framework == nil` to check if setup was run before.
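A minimal sketch of that guard, assuming `setup` is the only place that initializes `nps.framework` (the error message is illustrative, not the PR's actual wording):

```go
import "fmt"

// run refuses to proceed unless setup has populated nps.framework,
// since a nil framework means setup never ran (or failed) beforehand.
func (nps *networkPolicyEnforcementMeasurement) run(config *measurement.Config) error {
	if nps.framework == nil {
		return fmt.Errorf("the setup action must be run before the run action")
	}
	// ... measurement logic ...
	return nil
}
```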
/lgtm
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: dlapcevic, marseel, tosi3k
@marseel can we please merge now?
I think you just need to cancel the hold :).

/hold cancel
Thanks @tosi3k. I didn't know I had permission to do that.
Description
A new measurement that utilizes network policy enforcement latency test clients.
https://github.com/kubernetes/perf-tests/tree/master/network/tools/network-policy-enforcement-latency
This measurement is intended to be used in the ClusterLoader2 load test.
https://github.com/kubernetes/perf-tests/tree/master/clusterloader2/testing/load
Network policy enforcement latency is the time required for a change to pods or network policies to be reflected in the system. It's measured by simulating a real scenario with workloads that connect to each other, and measuring how long it takes for network policies to be honored for new pods or for network policy changes. The relevant scalability and performance dimensions are pod and policy churn rate (create, update, delete).
Usage
The measurement tests network policy enforcement latency for two cases:
- Created network policies: deploy the test clients (setup and run) with the "testType" flag set to "policy-creation", after creating the target pods.
- Created pods that are affected by network policies: deploy the test clients (setup and run) with the "testType" flag set to "pod-creation", before creating the target pods. Target pods are all pods that have the specified label: `{ targetLabelKey: targetLabelValue }`.
The test is set up by creating the required resources, including the network policy enforcement latency test client pods that measure the latencies and generate metrics for them.
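For orientation, a hypothetical sketch of wiring the measurement into a ClusterLoader2 test step. The Identifier/Method name, the `action` values, and the example label are assumptions; only `testType`, `targetLabelKey`, and `targetLabelValue` come from the description above:

```yaml
steps:
  - name: Setup network policy latency clients
    measurements:
      - Identifier: NetworkPolicyEnforcement   # assumed identifier
        Method: NetworkPolicyEnforcement       # assumed method name
        Params:
          action: setup                        # assumed action value
          targetLabelKey: app                  # example label key of target pods
          targetLabelValue: target             # example label value of target pods
  - name: Measure pod-creation enforcement latency
    measurements:
      - Identifier: NetworkPolicyEnforcement
        Method: NetworkPolicyEnforcement
        Params:
          action: run                          # assumed action value
          testType: pod-creation               # run before creating target pods
```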
/kind feature