[zh] translation for the run-app section #29451

Merged
translation for the run-app section
steven-my committed Aug 23, 2021
commit 742e7d7ee4bf31e8f228258107acf37f63e2c119
4 changes: 2 additions & 2 deletions content/zh/docs/tasks/debug-application-cluster/audit.md
@@ -168,15 +168,15 @@ rules:

<!--
If you're crafting your own audit profile, you can use the audit profile for Google Container-Optimized OS as a starting point. You can check the
-[configure-helper.sh](https://github.com/kubernetes/kubernetes/blob/{{< param "githubbranch" >}}/cluster/gce/gci/configure-helper.sh)
+[configure-helper.sh](https://github.com/kubernetes/kubernetes/blob/master/cluster/gce/gci/configure-helper.sh)
script, which generates the audit policy file. You can see most of the audit policy file by looking directly at the script.

You can also refer to the [`Policy` configuration reference](/docs/reference/config-api/apiserver-audit.v1/#audit-k8s-io-v1-Policy)
for details about the fields defined.
-->
如果你在打磨自己的审计配置文件,你可以使用为 Google Container-Optimized OS
设计的审计配置作为出发点。你可以参考
-[configure-helper.sh](https://github.com/kubernetes/kubernetes/blob/{{< param "githubbranch" >}}/cluster/gce/gci/configure-helper.sh)
+[configure-helper.sh](https://github.com/kubernetes/kubernetes/blob/master/cluster/gce/gci/configure-helper.sh)
脚本,该脚本能够生成审计策略文件。你可以直接在脚本中看到审计策略的绝大部份内容。

你也可以参考 [`Policy` 配置参考](/zh/docs/reference/config-api/apiserver-audit.v1/#audit-k8s-io-v1-Policy)
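
For context on the hunk above, a minimal audit `Policy` in the `audit.k8s.io/v1` format that scripts like configure-helper.sh emit could look like the following sketch; the rules shown are illustrative placeholders, not the actual generated policy.

```shell
# Illustrative sketch only: write a minimal audit Policy file.
cat <<'EOF' > audit-policy.yaml
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
  # Skip read-only requests for high-churn resources such as events.
  - level: None
    verbs: ["get", "list", "watch"]
    resources:
      - group: ""
        resources: ["events"]
  # Record everything else at Metadata level (no request/response bodies).
  - level: Metadata
EOF
```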
@@ -202,7 +202,7 @@ This is an incomplete list of things that could go wrong, and how to adjust your
- Action: Use IaaS providers reliable storage (e.g. GCE PD or AWS EBS volume) for VMs with apiserver+etcd
- Mitigates: Apiserver backing storage lost

-- Action: Use [high-availability](/docs/admin/high-availability) configuration
+- Action: Use [high-availability](/docs/setup/production-environment/tools/kubeadm/high-availability/) configuration
- Mitigates: Control plane node shutdown or control plane components (scheduler, API server, controller-manager) crashing
- Will tolerate one or more simultaneous node or component failures
- Mitigates: API server backing storage (i.e., etcd's data directory) lost
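
A rough sketch of the linked high-availability approach: a kubeadm control plane is bootstrapped against a load-balanced endpoint. The endpoint address and the choice of kubeadm are assumptions for illustration, not part of this diff.

```shell
# Sketch: initialize the first control-plane node behind a load balancer.
# "LOAD_BALANCER_DNS:6443" is a placeholder for your own endpoint.
sudo kubeadm init --control-plane-endpoint "LOAD_BALANCER_DNS:6443" --upload-certs
# Further control-plane nodes then join with the printed
# `kubeadm join ... --control-plane` command, giving the tolerance noted above.
```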
@@ -110,30 +110,27 @@ kubectl exec -it cassandra -- sh
<!--
## Debugging with an ephemeral debug container {#ephemeral-container}

-{{< feature-state state="alpha" for_k8s_version="v1.18" >}}
+{{< feature-state state="alpha" for_k8s_version="v1.22" >}}

{{< glossary_tooltip text="Ephemeral containers" term_id="ephemeral-container" >}}
are useful for interactive troubleshooting when `kubectl exec` is insufficient
because a container has crashed or a container image doesn't include debugging
utilities, such as with [distroless images](
-https://github.com/GoogleContainerTools/distroless). `kubectl` has an alpha
-command that can create ephemeral containers for debugging beginning with version
-`v1.18`.
+https://github.com/GoogleContainerTools/distroless).
-->
## 使用临时调试容器来进行调试 {#ephemeral-container}

-{{< feature-state state="alpha" for_k8s_version="v1.18" >}}
+{{< feature-state state="alpha" for_k8s_version="v1.22" >}}

当由于容器崩溃或容器镜像不包含调试程序(例如[无发行版镜像](https://github.com/GoogleContainerTools/distroless)等)
而导致 `kubectl exec` 无法运行时,{{< glossary_tooltip text="临时容器" term_id="ephemeral-container" >}}对于排除交互式故障很有用。
-从 'v1.18' 版本开始,'kubectl' 有一个可以创建用于调试的临时容器的 alpha 命令。

<!--
### Example debugging using ephemeral containers {#ephemeral-container-example}

The examples in this section require the `EphemeralContainers` [feature gate](
/docs/reference/command-line-tools-reference/feature-gates/) enabled in your
-cluster and `kubectl` version v1.18 or later.
+cluster and `kubectl` version v1.22 or later.

You can use the `kubectl debug` command to add ephemeral containers to a
running Pod. First, create a pod for the example:
@@ -151,7 +148,7 @@ images.
{{< note >}}
本示例需要你的集群已经开启 `EphemeralContainers`
[特性门控](/zh/docs/reference/command-line-tools-reference/feature-gates/),
-`kubectl` 版本为 v1.18 或者更高。
+`kubectl` 版本为 v1.22 或者更高。
{{< /note >}}
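
As a reminder of what that prerequisite involves, the gate is switched on via the `--feature-gates` flag on the control plane components; the manifest path below assumes a kubeadm-managed cluster.

```shell
# Sketch: on a kubeadm cluster, add the flag to the API server manifest,
#   /etc/kubernetes/manifests/kube-apiserver.yaml:
#   --feature-gates=EphemeralContainers=true
# Then confirm the client side meets the v1.22 requirement:
kubectl version --short
```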

你可以使用 `kubectl debug` 命令来给正在运行中的 Pod 增加一个临时容器。
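
The example elided from this hunk follows the upstream pattern of creating a shell-less pod and attaching a debug container; treat the pod and image names below as illustrative.

```shell
# Sketch: create a pod whose image contains no shell or debug tools,
# then attach an ephemeral busybox container targeting its process namespace.
kubectl run ephemeral-demo --image=k8s.gcr.io/pause:3.1 --restart=Never
kubectl debug -it ephemeral-demo --image=busybox --target=ephemeral-demo
```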
@@ -224,7 +221,7 @@ creates.
The `--target` parameter must be supported by the {{< glossary_tooltip
text="Container Runtime" term_id="container-runtime" >}}. When not supported,
the Ephemeral Container may not be started, or it may be started with an
-isolated process namespace.
+isolated process namespace so that `ps` does not reveal processes in other containers.

You can view the state of the newly created ephemeral container using `kubectl describe`:
-->
@@ -234,7 +231,8 @@ You can view the state of the newly created ephemeral container using `kubectl d

{{< note >}}
{{< glossary_tooltip text="容器运行时" term_id="container-runtime" >}}必须支持`--target`参数。
-如果不支持,则临时容器可能不会启动,或者可能使用隔离的进程命名空间启动。
+如果不支持,则临时容器可能不会启动,或者可能使用隔离的进程命名空间启动,
+以便 `ps` 不显示其他容器内的进程。
{{< /note >}}

你可以使用 `kubectl describe` 查看新创建的临时容器的状态:
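
A sketch of that check, assuming the `ephemeral-demo` pod from the earlier sketch:

```shell
# The Ephemeral Containers section of the output shows the debug container.
kubectl describe pod ephemeral-demo
```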
@@ -54,11 +54,11 @@ the container starts.
kubectl create -f https://k8s.io/examples/debug/termination.yaml
```

-<!--In the YAML file, in the `cmd` and `args` fields, you can see that the
+<!--In the YAML file, in the `command` and `args` fields, you can see that the
container sleeps for 10 seconds and then writes "Sleep expired" to
the `/dev/termination-log` file. After the container writes
the "Sleep expired" message, it terminates.-->
-YAML 文件中,在 `cmd` 和 `args` 字段,你可以看到容器休眠 10 秒然后将 "Sleep expired"
+YAML 文件中,在 `command` 和 `args` 字段,你可以看到容器休眠 10 秒然后将 "Sleep expired"
写入 `/dev/termination-log` 文件。
容器写完 "Sleep expired" 消息后就终止了。
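
A manifest matching that description would look roughly like the sketch below; the pod and container names are illustrative stand-ins for the real `debug/termination.yaml`.

```shell
# Sketch: a container that sleeps, writes its termination message, and exits.
cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: termination-demo
spec:
  containers:
  - name: termination-demo-container
    image: debian
    command: ["/bin/sh"]
    args: ["-c", "sleep 10 && echo Sleep expired > /dev/termination-log"]
EOF
```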

10 changes: 5 additions & 5 deletions content/zh/docs/tasks/run-application/delete-stateful-set.md
@@ -66,21 +66,21 @@ kubectl delete service <服务名称>
```

<!--
-When deleting a StatefulSet through `kubectl`, the StatefulSet scales down to 0. All Pods that are part of this workload are also deleted. If you want to delete only the StatefulSet and not the Pods, use `--cascade=false`.
+When deleting a StatefulSet through `kubectl`, the StatefulSet scales down to 0. All Pods that are part of this workload are also deleted. If you want to delete only the StatefulSet and not the Pods, use `--cascade=orphan`.
For example:
--->
当通过 `kubectl` 删除 StatefulSet 时,StatefulSet 会被缩容为 0。
属于该 StatefulSet 的所有 Pod 也被删除。
-如果你只想删除 StatefulSet 而不删除 Pod,使用 `--cascade=false`。
+如果你只想删除 StatefulSet 而不删除 Pod,使用 `--cascade=orphan`。

```shell
-kubectl delete -f <file.yaml> --cascade=false
+kubectl delete -f <file.yaml> --cascade=orphan
```

<!--
-By passing `--cascade=false` to `kubectl delete`, the Pods managed by the StatefulSet are left behind even after the StatefulSet object itself is deleted. If the pods have a label `app=myapp`, you can then delete them as follows:
+By passing `--cascade=orphan` to `kubectl delete`, the Pods managed by the StatefulSet are left behind even after the StatefulSet object itself is deleted. If the pods have a label `app=myapp`, you can then delete them as follows:
--->
-通过将 `--cascade=false` 传递给 `kubectl delete`,在删除 StatefulSet 对象之后,
+通过将 `--cascade=orphan` 传递给 `kubectl delete`,在删除 StatefulSet 对象之后,
StatefulSet 管理的 Pod 会被保留下来。如果 Pod 具有标签 `app=myapp`,则可以按照
如下方式删除它们:
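
Given the `app=myapp` label in the text, the elided deletion is a plain label-selector delete; a sketch:

```shell
# Delete the Pods orphaned by --cascade=orphan via their label.
kubectl delete pods -l app=myapp
```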

32 changes: 17 additions & 15 deletions content/zh/docs/tasks/run-application/horizontal-pod-autoscale.md
@@ -367,27 +367,29 @@ The detailed documentation of `kubectl autoscale` can be found [here](/docs/refe
<!--
## Autoscaling during rolling update

-Currently in Kubernetes, it is possible to perform a rolling update by using the deployment object,
-which manages the underlying replica sets for you.
-Horizontal Pod Autoscaler only supports the latter approach: the Horizontal Pod Autoscaler is bound to the deployment object,
-it sets the size for the deployment object, and the deployment is responsible for setting sizes of underlying replica sets.
+Kubernetes lets you perform a rolling update on a Deployment. In that
+case, the Deployment manages the underlying ReplicaSets for you.
+When you configure autoscaling for a Deployment, you bind a
+HorizontalPodAutoscaler to a single Deployment. The HorizontalPodAutoscaler
+manages the `replicas` field of the Deployment. The deployment controller is responsible
+for setting the `replicas` of the underlying ReplicaSets so that they add up to a suitable
+number during the rollout and also afterwards.
-->
## 滚动升级时扩缩 {#autoscaling-during-rolling-update}

-目前在 Kubernetes 中,可以针对 ReplicationController 或 Deployment 执行
-滚动更新,它们会为你管理底层副本数。
-Pod 水平扩缩只支持后一种:HPA 会被绑定到 Deployment 对象,
-HPA 设置副本数量时,Deployment 会设置底层副本数。
+Kubernetes 允许你在 Deployment 上执行滚动更新。在这种情况下,Deployment 为你管理下层的 ReplicaSet。
+当你为一个 Deployment 配置自动扩缩时,你要为每个 Deployment 绑定一个 HorizontalPodAutoscaler。
+HorizontalPodAutoscaler 管理 Deployment 的 `replicas` 字段。
+Deployment Controller 负责设置下层 ReplicaSet 的 `replicas` 字段,
+以便确保在上线及后续过程副本个数合适。
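
A sketch of that binding with `kubectl autoscale`; the Deployment name and thresholds are placeholders.

```shell
# The HPA created here owns the Deployment's .spec.replicas between 2 and 10.
kubectl autoscale deployment my-app --min=2 --max=10 --cpu-percent=80
```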

<!--
-Horizontal Pod Autoscaler does not work with rolling update using direct manipulation of replication controllers,
-i.e. you cannot bind a Horizontal Pod Autoscaler to a replication controller and do rolling update.
-The reason this doesn't work is that when rolling update creates a new replication controller,
-the Horizontal Pod Autoscaler will not be bound to the new replication controller.
+If you perform a rolling update of a StatefulSet that has an autoscaled number of
+replicas, the StatefulSet directly manages its set of Pods (there is no intermediate resource
+similar to ReplicaSet).
-->
-通过直接操控副本控制器执行滚动升级时,HPA 不能工作,
-也就是说你不能将 HPA 绑定到某个 RC 再执行滚动升级。
-HPA 不能工作的原因是它无法绑定到滚动更新时所新创建的副本控制器。
+如果你对一个副本个数被自动扩缩的 StatefulSet 执行滚动更新, 该 StatefulSet
+会直接管理它的 Pod 集合 (不存在类似 ReplicaSet 这样的中间资源)。
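
The same command shape works for a StatefulSet, which the HPA scales directly; the name and bounds are placeholders.

```shell
# No intermediate ReplicaSet exists: the HPA adjusts the StatefulSet's
# .spec.replicas itself, and the StatefulSet manages its Pods.
kubectl autoscale statefulset web --min=2 --max=5 --cpu-percent=80
```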

<!--
## Support for cooldown/delay