en,zh: Bump tidb components to v6.5.0 #2148

Merged: 1 commit, Dec 29, 2022
2 changes: 1 addition & 1 deletion en/access-dashboard.md
@@ -238,7 +238,7 @@ To enable this feature, you need to deploy TidbNGMonitoring CR using TiDB Operat
ngMonitoring:
requests:
storage: 10Gi
-version: v6.1.0
+version: v6.5.0
# storageClassName: default
baseImage: pingcap/ng-monitoring
```
6 changes: 3 additions & 3 deletions en/advanced-statefulset.md
@@ -95,7 +95,7 @@ kind: TidbCluster
metadata:
name: asts
spec:
-version: v6.1.0
+version: v6.5.0
timezone: UTC
pvReclaimPolicy: Delete
pd:
@@ -147,7 +147,7 @@ metadata:
tikv.tidb.pingcap.com/delete-slots: '[1]'
name: asts
spec:
-version: v6.1.0
+version: v6.5.0
timezone: UTC
pvReclaimPolicy: Delete
pd:
@@ -201,7 +201,7 @@ metadata:
tikv.tidb.pingcap.com/delete-slots: '[]'
name: asts
spec:
-version: v6.1.0
+version: v6.5.0
timezone: UTC
pvReclaimPolicy: Delete
pd:
2 changes: 1 addition & 1 deletion en/aggregate-multiple-cluster-monitor-data.md
@@ -170,7 +170,7 @@ spec:
version: 7.5.11
initializer:
baseImage: registry.cn-beijing.aliyuncs.com/tidb/tidb-monitor-initializer
-version: v6.1.0
+version: v6.5.0
reloader:
baseImage: registry.cn-beijing.aliyuncs.com/tidb/tidb-monitor-reloader
version: v1.0.1
8 changes: 4 additions & 4 deletions en/backup-restore-cr.md
@@ -20,10 +20,10 @@ This section introduces the fields in the `Backup` CR.

- When using BR for backup, you can specify the BR version in this field.
- If the field is not specified or the value is empty, the `pingcap/br:${tikv_version}` image is used for backup by default.
-- If the BR version is specified in this field, such as `.spec.toolImage: pingcap/br:v6.1.0`, the image of the specified version is used for backup.
+- If the BR version is specified in this field, such as `.spec.toolImage: pingcap/br:v6.5.0`, the image of the specified version is used for backup.
- If an image is specified without the version, such as `.spec.toolImage: private/registry/br`, the `private/registry/br:${tikv_version}` image is used for backup.
- When using Dumpling for backup, you can specify the Dumpling version in this field.
-- If the Dumpling version is specified in this field, such as `spec.toolImage: pingcap/dumpling:v6.1.0`, the image of the specified version is used for backup.
+- If the Dumpling version is specified in this field, such as `spec.toolImage: pingcap/dumpling:v6.5.0`, the image of the specified version is used for backup.
- If the field is not specified, the Dumpling version specified in `TOOLKIT_VERSION` of the [Backup Manager Dockerfile](https://github.com/pingcap/tidb-operator/blob/master/images/tidb-backup-manager/Dockerfile) is used for backup by default.
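For context on where `.spec.toolImage` sits, here is a minimal `Backup` CR sketch; the names, namespace, and S3 details are illustrative assumptions, not part of this PR:

```yaml
apiVersion: pingcap.com/v1alpha1
kind: Backup
metadata:
  name: demo-backup            # hypothetical name
  namespace: backup-test       # hypothetical namespace
spec:
  # Pin BR explicitly; if omitted, pingcap/br:${tikv_version} is used
  toolImage: pingcap/br:v6.5.0
  br:
    cluster: demo-cluster      # hypothetical TidbCluster name
    clusterNamespace: tidb-cluster
  s3:
    provider: aws
    secretName: s3-secret      # assumed pre-created credentials Secret
    bucket: my-bucket
    prefix: my-folder
```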

* `.spec.backupType`: the backup type. This field is valid only when you use BR for backup. Currently, the following three types are supported, and this field can be combined with the `.spec.tableFilter` field to configure table filter rules:
@@ -255,8 +255,8 @@ This section introduces the fields in the `Restore` CR.
* `.spec.metadata.namespace`: the namespace where the `Restore` CR is located.
* `.spec.toolImage`: the tool image used by `Restore`. TiDB Operator supports this configuration starting from v1.1.9.

-- When using BR for restoring, you can specify the BR version in this field. For example,`spec.toolImage: pingcap/br:v6.1.0`. If not specified, `pingcap/br:${tikv_version}` is used for restoring by default.
-- When using Lightning for restoring, you can specify the Lightning version in this field. For example, `spec.toolImage: pingcap/lightning:v6.1.0`. If not specified, the Lightning version specified in `TOOLKIT_VERSION` of the [Backup Manager Dockerfile](https://github.com/pingcap/tidb-operator/blob/master/images/tidb-backup-manager/Dockerfile) is used for restoring by default.
+- When using BR for restoring, you can specify the BR version in this field. For example,`spec.toolImage: pingcap/br:v6.5.0`. If not specified, `pingcap/br:${tikv_version}` is used for restoring by default.
+- When using Lightning for restoring, you can specify the Lightning version in this field. For example, `spec.toolImage: pingcap/lightning:v6.5.0`. If not specified, the Lightning version specified in `TOOLKIT_VERSION` of the [Backup Manager Dockerfile](https://github.com/pingcap/tidb-operator/blob/master/images/tidb-backup-manager/Dockerfile) is used for restoring by default.
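The restore-side `.spec.toolImage` sits at the same top level of the `Restore` spec; a minimal sketch under the same assumptions (names and storage details are hypothetical):

```yaml
apiVersion: pingcap.com/v1alpha1
kind: Restore
metadata:
  name: demo-restore           # hypothetical name
  namespace: restore-test      # hypothetical namespace
spec:
  # Pin BR explicitly; defaults to pingcap/br:${tikv_version} if omitted
  toolImage: pingcap/br:v6.5.0
  br:
    cluster: demo-cluster      # hypothetical target TidbCluster
    clusterNamespace: tidb-cluster
  s3:
    provider: aws
    secretName: s3-secret      # assumed pre-created credentials Secret
    bucket: my-bucket
    prefix: my-folder
```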

* `.spec.backupType`: the restore type. This field is valid only when you use BR to restore data. Currently, the following three types are supported, and this field can be combined with the `.spec.tableFilter` field to configure table filter rules:
* `full`: restore all databases in a TiDB cluster.
4 changes: 2 additions & 2 deletions en/configure-a-tidb-cluster.md
@@ -41,11 +41,11 @@ Usually, components in a cluster are in the same version. It is recommended to c

Here are the formats of the parameters:

-- `spec.version`: the format is `imageTag`, such as `v6.1.0`
+- `spec.version`: the format is `imageTag`, such as `v6.5.0`

- `spec.<pd/tidb/tikv/pump/tiflash/ticdc>.baseImage`: the format is `imageName`, such as `pingcap/tidb`

-- `spec.<pd/tidb/tikv/pump/tiflash/ticdc>.version`: the format is `imageTag`, such as `v6.1.0`
+- `spec.<pd/tidb/tikv/pump/tiflash/ticdc>.version`: the format is `imageTag`, such as `v6.5.0`
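The three parameter formats above combine in a `TidbCluster` manifest roughly as follows; the cluster name, replica counts, and the commented override are illustrative assumptions:

```yaml
apiVersion: pingcap.com/v1alpha1
kind: TidbCluster
metadata:
  name: basic                  # hypothetical cluster name
spec:
  version: v6.5.0              # spec.version: the imageTag shared by all components
  pd:
    baseImage: pingcap/pd      # spec.pd.baseImage: the imageName
    replicas: 3
  tikv:
    baseImage: pingcap/tikv
    replicas: 3
  tidb:
    baseImage: pingcap/tidb
    replicas: 2
    # version: v6.5.0          # spec.tidb.version: optional per-component imageTag override
```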

### Recommended configuration

2 changes: 1 addition & 1 deletion en/deploy-cluster-on-arm64.md
@@ -38,7 +38,7 @@ Before starting the process, make sure that Kubernetes clusters are deployed on
name: ${cluster_name}
namespace: ${cluster_namespace}
spec:
-version: "v6.1.0"
+version: "v6.5.0"
# ...
helper:
image: busybox:1.33.0
6 changes: 3 additions & 3 deletions en/deploy-heterogeneous-tidb-cluster.md
@@ -47,7 +47,7 @@ To deploy a heterogeneous cluster, do the following:
name: ${heterogeneous_cluster_name}
spec:
configUpdateStrategy: RollingUpdate
-version: v6.1.0
+version: v6.5.0
timezone: UTC
pvReclaimPolicy: Delete
discovery: {}
@@ -127,7 +127,7 @@ After creating certificates, take the following steps to deploy a TLS-enabled he
tlsCluster:
enabled: true
configUpdateStrategy: RollingUpdate
-version: v6.1.0
+version: v6.5.0
timezone: UTC
pvReclaimPolicy: Delete
discovery: {}
@@ -216,7 +216,7 @@ If you need to deploy a monitoring component for a heterogeneous cluster, take t
version: 7.5.11
initializer:
baseImage: pingcap/tidb-monitor-initializer
-version: v6.1.0
+version: v6.5.0
reloader:
baseImage: pingcap/tidb-monitor-reloader
version: v1.0.1
58 changes: 29 additions & 29 deletions en/deploy-on-general-kubernetes.md
@@ -42,17 +42,17 @@ This document describes how to deploy a TiDB cluster on general Kubernetes.

If the server does not have an external network, you need to download the Docker image used by the TiDB cluster on a machine with Internet access and upload it to the server, and then use `docker load` to install the Docker image on the server.

-To deploy a TiDB cluster, you need the following Docker images (assuming the version of the TiDB cluster is v6.1.0):
+To deploy a TiDB cluster, you need the following Docker images (assuming the version of the TiDB cluster is v6.5.0):

```shell
-pingcap/pd:v6.1.0
-pingcap/tikv:v6.1.0
-pingcap/tidb:v6.1.0
-pingcap/tidb-binlog:v6.1.0
-pingcap/ticdc:v6.1.0
-pingcap/tiflash:v6.1.0
+pingcap/pd:v6.5.0
+pingcap/tikv:v6.5.0
+pingcap/tidb:v6.5.0
+pingcap/tidb-binlog:v6.5.0
+pingcap/ticdc:v6.5.0
+pingcap/tiflash:v6.5.0
pingcap/tidb-monitor-reloader:v1.0.1
-pingcap/tidb-monitor-initializer:v6.1.0
+pingcap/tidb-monitor-initializer:v6.5.0
grafana/grafana:6.0.1
prom/prometheus:v2.18.1
busybox:1.26.2
@@ -63,26 +63,26 @@ This document describes how to deploy a TiDB cluster on general Kubernetes.
{{< copyable "shell-regular" >}}

```shell
-docker pull pingcap/pd:v6.1.0
-docker pull pingcap/tikv:v6.1.0
-docker pull pingcap/tidb:v6.1.0
-docker pull pingcap/tidb-binlog:v6.1.0
-docker pull pingcap/ticdc:v6.1.0
-docker pull pingcap/tiflash:v6.1.0
+docker pull pingcap/pd:v6.5.0
+docker pull pingcap/tikv:v6.5.0
+docker pull pingcap/tidb:v6.5.0
+docker pull pingcap/tidb-binlog:v6.5.0
+docker pull pingcap/ticdc:v6.5.0
+docker pull pingcap/tiflash:v6.5.0
docker pull pingcap/tidb-monitor-reloader:v1.0.1
-docker pull pingcap/tidb-monitor-initializer:v6.1.0
+docker pull pingcap/tidb-monitor-initializer:v6.5.0
docker pull grafana/grafana:6.0.1
docker pull prom/prometheus:v2.18.1
docker pull busybox:1.26.2

-docker save -o pd-v6.1.0.tar pingcap/pd:v6.1.0
-docker save -o tikv-v6.1.0.tar pingcap/tikv:v6.1.0
-docker save -o tidb-v6.1.0.tar pingcap/tidb:v6.1.0
-docker save -o tidb-binlog-v6.1.0.tar pingcap/tidb-binlog:v6.1.0
-docker save -o ticdc-v6.1.0.tar pingcap/ticdc:v6.1.0
-docker save -o tiflash-v6.1.0.tar pingcap/tiflash:v6.1.0
+docker save -o pd-v6.5.0.tar pingcap/pd:v6.5.0
+docker save -o tikv-v6.5.0.tar pingcap/tikv:v6.5.0
+docker save -o tidb-v6.5.0.tar pingcap/tidb:v6.5.0
+docker save -o tidb-binlog-v6.5.0.tar pingcap/tidb-binlog:v6.5.0
+docker save -o ticdc-v6.5.0.tar pingcap/ticdc:v6.5.0
+docker save -o tiflash-v6.5.0.tar pingcap/tiflash:v6.5.0
docker save -o tidb-monitor-reloader-v1.0.1.tar pingcap/tidb-monitor-reloader:v1.0.1
-docker save -o tidb-monitor-initializer-v6.1.0.tar pingcap/tidb-monitor-initializer:v6.1.0
+docker save -o tidb-monitor-initializer-v6.5.0.tar pingcap/tidb-monitor-initializer:v6.5.0
docker save -o grafana-6.0.1.tar grafana/grafana:6.0.1
docker save -o prometheus-v2.18.1.tar prom/prometheus:v2.18.1
docker save -o busybox-1.26.2.tar busybox:1.26.2
@@ -93,14 +93,14 @@ This document describes how to deploy a TiDB cluster on general Kubernetes.
{{< copyable "shell-regular" >}}

```shell
-docker load -i pd-v6.1.0.tar
-docker load -i tikv-v6.1.0.tar
-docker load -i tidb-v6.1.0.tar
-docker load -i tidb-binlog-v6.1.0.tar
-docker load -i ticdc-v6.1.0.tar
-docker load -i tiflash-v6.1.0.tar
+docker load -i pd-v6.5.0.tar
+docker load -i tikv-v6.5.0.tar
+docker load -i tidb-v6.5.0.tar
+docker load -i tidb-binlog-v6.5.0.tar
+docker load -i ticdc-v6.5.0.tar
+docker load -i tiflash-v6.5.0.tar
docker load -i tidb-monitor-reloader-v1.0.1.tar
-docker load -i tidb-monitor-initializer-v6.1.0.tar
+docker load -i tidb-monitor-initializer-v6.5.0.tar
docker load -i grafana-6.0.1.tar
docker load -i prometheus-v2.18.1.tar
docker load -i busybox-1.26.2.tar
6 changes: 3 additions & 3 deletions en/deploy-tidb-binlog.md
@@ -28,7 +28,7 @@ TiDB Binlog is disabled in the TiDB cluster by default. To create a TiDB cluster
...
pump:
baseImage: pingcap/tidb-binlog
-version: v6.1.0
+version: v6.5.0
replicas: 1
storageClassName: local-storage
requests:
@@ -47,7 +47,7 @@ TiDB Binlog is disabled in the TiDB cluster by default. To create a TiDB cluster
...
pump:
baseImage: pingcap/tidb-binlog
-version: v6.1.0
+version: v6.5.0
replicas: 1
storageClassName: local-storage
requests:
@@ -188,7 +188,7 @@ To deploy multiple drainers using the `tidb-drainer` Helm chart for a TiDB clust

```yaml
clusterName: example-tidb
-clusterVersion: v6.1.0
+clusterVersion: v6.5.0
baseImage: pingcap/tidb-binlog
storageClassName: local-storage
storage: 10Gi
8 changes: 4 additions & 4 deletions en/deploy-tidb-cluster-across-multiple-kubernetes.md
@@ -52,7 +52,7 @@ kind: TidbCluster
metadata:
name: "${tc_name_1}"
spec:
-version: v6.1.0
+version: v6.5.0
timezone: UTC
pvReclaimPolicy: Delete
enableDynamicConfiguration: true
@@ -106,7 +106,7 @@ kind: TidbCluster
metadata:
name: "${tc_name_2}"
spec:
-version: v6.1.0
+version: v6.5.0
timezone: UTC
pvReclaimPolicy: Delete
enableDynamicConfiguration: true
@@ -383,7 +383,7 @@ kind: TidbCluster
metadata:
name: "${tc_name_1}"
spec:
-version: v6.1.0
+version: v6.5.0
timezone: UTC
tlsCluster:
enabled: true
@@ -441,7 +441,7 @@ kind: TidbCluster
metadata:
name: "${tc_name_2}"
spec:
-version: v6.1.0
+version: v6.5.0
timezone: UTC
tlsCluster:
enabled: true
16 changes: 8 additions & 8 deletions en/deploy-tidb-dm.md
@@ -29,9 +29,9 @@ Usually, components in a cluster are in the same version. It is recommended to c

The formats of the related parameters are as follows:

-- `spec.version`: the format is `imageTag`, such as `v6.1.0`.
+- `spec.version`: the format is `imageTag`, such as `v6.5.0`.
- `spec.<master/worker>.baseImage`: the format is `imageName`, such as `pingcap/dm`.
-- `spec.<master/worker>.version`: the format is `imageTag`, such as `v6.1.0`.
+- `spec.<master/worker>.version`: the format is `imageTag`, such as `v6.5.0`.

TiDB Operator only supports deploying DM 2.0 and later versions.

Expand All @@ -50,7 +50,7 @@ metadata:
name: ${dm_cluster_name}
namespace: ${namespace}
spec:
-version: v6.1.0
+version: v6.5.0
configUpdateStrategy: RollingUpdate
pvReclaimPolicy: Retain
discovery: {}
@@ -141,27 +141,27 @@ kubectl apply -f ${dm_cluster_name}.yaml -n ${namespace}

If the server does not have an external network, you need to download the Docker image used by the DM cluster and upload the image to the server, and then execute `docker load` to install the Docker image on the server:

-1. Deploy a DM cluster requires the following Docker image (assuming the version of the DM cluster is v6.1.0):
+1. Deploy a DM cluster requires the following Docker image (assuming the version of the DM cluster is v6.5.0):

```shell
-pingcap/dm:v6.1.0
+pingcap/dm:v6.5.0
```

2. To download the image, execute the following command:

{{< copyable "shell-regular" >}}

```shell
-docker pull pingcap/dm:v6.1.0
-docker save -o dm-v6.1.0.tar pingcap/dm:v6.1.0
+docker pull pingcap/dm:v6.5.0
+docker save -o dm-v6.5.0.tar pingcap/dm:v6.5.0
```

3. Upload the Docker image to the server, and execute `docker load` to install the image on the server:

{{< copyable "shell-regular" >}}

```shell
-docker load -i dm-v6.1.0.tar
+docker load -i dm-v6.5.0.tar
```

After deploying the DM cluster, execute the following command to view the Pod status:
2 changes: 1 addition & 1 deletion en/deploy-tidb-monitor-across-multiple-kubernetes.md
@@ -302,7 +302,7 @@ After collecting data using Prometheus, you can visualize multi-cluster monitori

```shell
# set tidb version here
-version=v6.1.0
+version=v6.5.0
docker run --rm -i -v ${PWD}/dashboards:/dashboards/ pingcap/tidb-monitor-initializer:${version} && \
cd dashboards
```
4 changes: 2 additions & 2 deletions en/enable-tls-between-components.md
@@ -1337,7 +1337,7 @@ In this step, you need to perform the following operations:
spec:
tlsCluster:
enabled: true
-version: v6.1.0
+version: v6.5.0
timezone: UTC
pvReclaimPolicy: Retain
pd:
@@ -1396,7 +1396,7 @@ In this step, you need to perform the following operations:
version: 7.5.11
initializer:
baseImage: pingcap/tidb-monitor-initializer
-version: v6.1.0
+version: v6.5.0
reloader:
baseImage: pingcap/tidb-monitor-reloader
version: v1.0.1
4 changes: 2 additions & 2 deletions en/enable-tls-for-dm.md
@@ -518,7 +518,7 @@ metadata:
spec:
tlsCluster:
enabled: true
-version: v6.1.0
+version: v6.5.0
pvReclaimPolicy: Retain
discovery: {}
master:
@@ -588,7 +588,7 @@ metadata:
name: ${cluster_name}
namespace: ${namespace}
spec:
-version: v6.1.0
+version: v6.5.0
pvReclaimPolicy: Retain
discovery: {}
tlsClientSecretNames:
2 changes: 1 addition & 1 deletion en/enable-tls-for-mysql-client.md
@@ -554,7 +554,7 @@ In this step, you create a TiDB cluster and perform the following operations:
name: ${cluster_name}
namespace: ${namespace}
spec:
-version: v6.1.0
+version: v6.5.0
timezone: UTC
pvReclaimPolicy: Retain
pd: