diff --git a/en/access-dashboard.md b/en/access-dashboard.md
index ee63f051ad..f79c6d3205 100644
--- a/en/access-dashboard.md
+++ b/en/access-dashboard.md
@@ -237,7 +237,7 @@ To enable this feature, you need to deploy TidbNGMonitoring CR using TiDB Operat
   ngMonitoring:
     requests:
       storage: 10Gi
-    version: v6.1.0
+    version: v6.5.0
     # storageClassName: default
     baseImage: pingcap/ng-monitoring
 ```
diff --git a/en/advanced-statefulset.md b/en/advanced-statefulset.md
index d2a4a4cd8d..fc7a744951 100644
--- a/en/advanced-statefulset.md
+++ b/en/advanced-statefulset.md
@@ -94,7 +94,7 @@ kind: TidbCluster
 metadata:
   name: asts
 spec:
-  version: v6.1.0
+  version: v6.5.0
   timezone: UTC
   pvReclaimPolicy: Delete
   pd:
@@ -146,7 +146,7 @@ metadata:
     tikv.tidb.pingcap.com/delete-slots: '[1]'
   name: asts
 spec:
-  version: v6.1.0
+  version: v6.5.0
   timezone: UTC
   pvReclaimPolicy: Delete
   pd:
@@ -200,7 +200,7 @@ metadata:
     tikv.tidb.pingcap.com/delete-slots: '[]'
   name: asts
 spec:
-  version: v6.1.0
+  version: v6.5.0
   timezone: UTC
   pvReclaimPolicy: Delete
   pd:
diff --git a/en/aggregate-multiple-cluster-monitor-data.md b/en/aggregate-multiple-cluster-monitor-data.md
index cf6f137e7d..df6c0954c7 100644
--- a/en/aggregate-multiple-cluster-monitor-data.md
+++ b/en/aggregate-multiple-cluster-monitor-data.md
@@ -170,7 +170,7 @@ spec:
     version: 7.5.11
   initializer:
     baseImage: registry.cn-beijing.aliyuncs.com/tidb/tidb-monitor-initializer
-    version: v6.1.0
+    version: v6.5.0
   reloader:
     baseImage: registry.cn-beijing.aliyuncs.com/tidb/tidb-monitor-reloader
     version: v1.0.1
diff --git a/en/backup-restore-cr.md b/en/backup-restore-cr.md
index db3a275b13..c057aa5ece 100644
--- a/en/backup-restore-cr.md
+++ b/en/backup-restore-cr.md
@@ -20,10 +20,10 @@ This section introduces the fields in the `Backup` CR.
     - When using BR for backup, you can specify the BR version in this field.
         - If the field is not specified or the value is empty, the `pingcap/br:${tikv_version}` image is used for backup by default.
-        - If the BR version is specified in this field, such as `.spec.toolImage: pingcap/br:v6.1.0`, the image of the specified version is used for backup.
+        - If the BR version is specified in this field, such as `.spec.toolImage: pingcap/br:v6.5.0`, the image of the specified version is used for backup.
         - If an image is specified without the version, such as `.spec.toolImage: private/registry/br`, the `private/registry/br:${tikv_version}` image is used for backup.
     - When using Dumpling for backup, you can specify the Dumpling version in this field.
-        - If the Dumpling version is specified in this field, such as `spec.toolImage: pingcap/dumpling:v6.1.0`, the image of the specified version is used for backup.
+        - If the Dumpling version is specified in this field, such as `spec.toolImage: pingcap/dumpling:v6.5.0`, the image of the specified version is used for backup.
         - If the field is not specified, the Dumpling version specified in `TOOLKIT_VERSION` of the [Backup Manager Dockerfile](https://github.com/pingcap/tidb-operator/blob/master/images/tidb-backup-manager/Dockerfile) is used for backup by default.
 * `.spec.backupType`: the backup type. This field is valid only when you use BR for backup. Currently, the following three types are supported, and this field can be combined with the `.spec.tableFilter` field to configure table filter rules:
@@ -255,8 +255,8 @@ This section introduces the fields in the `Restore` CR.
 * `.spec.metadata.namespace`: the namespace where the `Restore` CR is located.
 * `.spec.toolImage`:the tools image used by `Restore`. TiDB Operator supports this configuration starting from v1.1.9.
-    - When using BR for restoring, you can specify the BR version in this field. For example,`spec.toolImage: pingcap/br:v6.1.0`. If not specified, `pingcap/br:${tikv_version}` is used for restoring by default.
-    - When using Lightning for restoring, you can specify the Lightning version in this field. For example, `spec.toolImage: pingcap/lightning:v6.1.0`. If not specified, the Lightning version specified in `TOOLKIT_VERSION` of the [Backup Manager Dockerfile](https://github.com/pingcap/tidb-operator/blob/master/images/tidb-backup-manager/Dockerfile) is used for restoring by default.
+    - When using BR for restoring, you can specify the BR version in this field. For example,`spec.toolImage: pingcap/br:v6.5.0`. If not specified, `pingcap/br:${tikv_version}` is used for restoring by default.
+    - When using Lightning for restoring, you can specify the Lightning version in this field. For example, `spec.toolImage: pingcap/lightning:v6.5.0`. If not specified, the Lightning version specified in `TOOLKIT_VERSION` of the [Backup Manager Dockerfile](https://github.com/pingcap/tidb-operator/blob/master/images/tidb-backup-manager/Dockerfile) is used for restoring by default.
 * `.spec.backupType`: the restore type. This field is valid only when you use BR to restore data. Currently, the following three types are supported, and this field can be combined with the `.spec.tableFilter` field to configure table filter rules:
     * `full`: restore all databases in a TiDB cluster.
diff --git a/en/configure-a-tidb-cluster.md b/en/configure-a-tidb-cluster.md
index 7b5db6bc2c..8688e985e2 100644
--- a/en/configure-a-tidb-cluster.md
+++ b/en/configure-a-tidb-cluster.md
@@ -40,11 +40,11 @@ Usually, components in a cluster are in the same version. It is recommended to c
 Here are the formats of the parameters:
-- `spec.version`: the format is `imageTag`, such as `v6.1.0`
+- `spec.version`: the format is `imageTag`, such as `v6.5.0`
 - `spec..baseImage`: the format is `imageName`, such as `pingcap/tidb`
-- `spec..version`: the format is `imageTag`, such as `v6.1.0`
+- `spec..version`: the format is `imageTag`, such as `v6.5.0`
 ### Recommended configuration
diff --git a/en/deploy-cluster-on-arm64.md b/en/deploy-cluster-on-arm64.md
index c356b152dd..3786edc44e 100644
--- a/en/deploy-cluster-on-arm64.md
+++ b/en/deploy-cluster-on-arm64.md
@@ -38,7 +38,7 @@ Before starting the process, make sure that Kubernetes clusters are deployed on
   name: ${cluster_name}
   namespace: ${cluster_namespace}
 spec:
-  version: "v6.1.0"
+  version: "v6.5.0"
   # ...
   helper:
     image: busybox:1.33.0
diff --git a/en/deploy-heterogeneous-tidb-cluster.md b/en/deploy-heterogeneous-tidb-cluster.md
index 8511350229..1a3ae21784 100644
--- a/en/deploy-heterogeneous-tidb-cluster.md
+++ b/en/deploy-heterogeneous-tidb-cluster.md
@@ -47,7 +47,7 @@ To deploy a heterogeneous cluster, do the following:
   name: ${heterogeneous_cluster_name}
 spec:
   configUpdateStrategy: RollingUpdate
-  version: v6.1.0
+  version: v6.5.0
   timezone: UTC
   pvReclaimPolicy: Delete
   discovery: {}
@@ -127,7 +127,7 @@ After creating certificates, take the following steps to deploy a TLS-enabled he
   tlsCluster:
     enabled: true
   configUpdateStrategy: RollingUpdate
-  version: v6.1.0
+  version: v6.5.0
   timezone: UTC
   pvReclaimPolicy: Delete
   discovery: {}
@@ -216,7 +216,7 @@ If you need to deploy a monitoring component for a heterogeneous cluster, take t
     version: 7.5.11
   initializer:
     baseImage: pingcap/tidb-monitor-initializer
-    version: v6.1.0
+    version: v6.5.0
   reloader:
     baseImage: pingcap/tidb-monitor-reloader
     version: v1.0.1
diff --git a/en/deploy-on-general-kubernetes.md b/en/deploy-on-general-kubernetes.md
index 2551b433d4..2b5d28d0b3 100644
--- a/en/deploy-on-general-kubernetes.md
+++ b/en/deploy-on-general-kubernetes.md
@@ -41,17 +41,17 @@ This document describes how to deploy a TiDB cluster on general Kubernetes.
 If the server does not have an external network, you need to download the Docker image used by the TiDB cluster on a machine with Internet access and upload it to the server, and then use `docker load` to install the Docker image on the server.
- To deploy a TiDB cluster, you need the following Docker images (assuming the version of the TiDB cluster is v6.1.0):
+ To deploy a TiDB cluster, you need the following Docker images (assuming the version of the TiDB cluster is v6.5.0):
 ```shell
- pingcap/pd:v6.1.0
- pingcap/tikv:v6.1.0
- pingcap/tidb:v6.1.0
- pingcap/tidb-binlog:v6.1.0
- pingcap/ticdc:v6.1.0
- pingcap/tiflash:v6.1.0
+ pingcap/pd:v6.5.0
+ pingcap/tikv:v6.5.0
+ pingcap/tidb:v6.5.0
+ pingcap/tidb-binlog:v6.5.0
+ pingcap/ticdc:v6.5.0
+ pingcap/tiflash:v6.5.0
 pingcap/tidb-monitor-reloader:v1.0.1
- pingcap/tidb-monitor-initializer:v6.1.0
+ pingcap/tidb-monitor-initializer:v6.5.0
 grafana/grafana:6.0.1
 prom/prometheus:v2.18.1
 busybox:1.26.2
@@ -62,26 +62,26 @@ This document describes how to deploy a TiDB cluster on general Kubernetes.
 {{< copyable "shell-regular" >}}
 ```shell
- docker pull pingcap/pd:v6.1.0
- docker pull pingcap/tikv:v6.1.0
- docker pull pingcap/tidb:v6.1.0
- docker pull pingcap/tidb-binlog:v6.1.0
- docker pull pingcap/ticdc:v6.1.0
- docker pull pingcap/tiflash:v6.1.0
+ docker pull pingcap/pd:v6.5.0
+ docker pull pingcap/tikv:v6.5.0
+ docker pull pingcap/tidb:v6.5.0
+ docker pull pingcap/tidb-binlog:v6.5.0
+ docker pull pingcap/ticdc:v6.5.0
+ docker pull pingcap/tiflash:v6.5.0
 docker pull pingcap/tidb-monitor-reloader:v1.0.1
- docker pull pingcap/tidb-monitor-initializer:v6.1.0
+ docker pull pingcap/tidb-monitor-initializer:v6.5.0
 docker pull grafana/grafana:6.0.1
 docker pull prom/prometheus:v2.18.1
 docker pull busybox:1.26.2
- docker save -o pd-v6.1.0.tar pingcap/pd:v6.1.0
- docker save -o tikv-v6.1.0.tar pingcap/tikv:v6.1.0
- docker save -o tidb-v6.1.0.tar pingcap/tidb:v6.1.0
- docker save -o tidb-binlog-v6.1.0.tar pingcap/tidb-binlog:v6.1.0
- docker save -o ticdc-v6.1.0.tar pingcap/ticdc:v6.1.0
- docker save -o tiflash-v6.1.0.tar pingcap/tiflash:v6.1.0
+ docker save -o pd-v6.5.0.tar pingcap/pd:v6.5.0
+ docker save -o tikv-v6.5.0.tar pingcap/tikv:v6.5.0
+ docker save -o tidb-v6.5.0.tar pingcap/tidb:v6.5.0
+ docker save -o tidb-binlog-v6.5.0.tar pingcap/tidb-binlog:v6.5.0
+ docker save -o ticdc-v6.5.0.tar pingcap/ticdc:v6.5.0
+ docker save -o tiflash-v6.5.0.tar pingcap/tiflash:v6.5.0
 docker save -o tidb-monitor-reloader-v1.0.1.tar pingcap/tidb-monitor-reloader:v1.0.1
- docker save -o tidb-monitor-initializer-v6.1.0.tar pingcap/tidb-monitor-initializer:v6.1.0
+ docker save -o tidb-monitor-initializer-v6.5.0.tar pingcap/tidb-monitor-initializer:v6.5.0
 docker save -o grafana-6.0.1.tar grafana/grafana:6.0.1
 docker save -o prometheus-v2.18.1.tar prom/prometheus:v2.18.1
 docker save -o busybox-1.26.2.tar busybox:1.26.2
@@ -92,14 +92,14 @@ This document describes how to deploy a TiDB cluster on general Kubernetes.
 {{< copyable "shell-regular" >}}
 ```shell
- docker load -i pd-v6.1.0.tar
- docker load -i tikv-v6.1.0.tar
- docker load -i tidb-v6.1.0.tar
- docker load -i tidb-binlog-v6.1.0.tar
- docker load -i ticdc-v6.1.0.tar
- docker load -i tiflash-v6.1.0.tar
+ docker load -i pd-v6.5.0.tar
+ docker load -i tikv-v6.5.0.tar
+ docker load -i tidb-v6.5.0.tar
+ docker load -i tidb-binlog-v6.5.0.tar
+ docker load -i ticdc-v6.5.0.tar
+ docker load -i tiflash-v6.5.0.tar
 docker load -i tidb-monitor-reloader-v1.0.1.tar
- docker load -i tidb-monitor-initializer-v6.1.0.tar
+ docker load -i tidb-monitor-initializer-v6.5.0.tar
 docker load -i grafana-6.0.1.tar
 docker load -i prometheus-v2.18.1.tar
 docker load -i busybox-1.26.2.tar
diff --git a/en/deploy-tidb-binlog.md b/en/deploy-tidb-binlog.md
index 289041c294..eed332f0f7 100644
--- a/en/deploy-tidb-binlog.md
+++ b/en/deploy-tidb-binlog.md
@@ -27,7 +27,7 @@ TiDB Binlog is disabled in the TiDB cluster by default. To create a TiDB cluster
   ...
   pump:
     baseImage: pingcap/tidb-binlog
-    version: v6.1.0
+    version: v6.5.0
     replicas: 1
     storageClassName: local-storage
     requests:
@@ -46,7 +46,7 @@ TiDB Binlog is disabled in the TiDB cluster by default. To create a TiDB cluster
   ...
   pump:
     baseImage: pingcap/tidb-binlog
-    version: v6.1.0
+    version: v6.5.0
     replicas: 1
     storageClassName: local-storage
     requests:
@@ -187,7 +187,7 @@ To deploy multiple drainers using the `tidb-drainer` Helm chart for a TiDB clust
 ```yaml
 clusterName: example-tidb
-clusterVersion: v6.1.0
+clusterVersion: v6.5.0
 baseImage:pingcap/tidb-binlog
 storageClassName: local-storage
 storage: 10Gi
diff --git a/en/deploy-tidb-cluster-across-multiple-kubernetes.md b/en/deploy-tidb-cluster-across-multiple-kubernetes.md
index b74d8a6122..f2e7669baf 100644
--- a/en/deploy-tidb-cluster-across-multiple-kubernetes.md
+++ b/en/deploy-tidb-cluster-across-multiple-kubernetes.md
@@ -52,7 +52,7 @@ kind: TidbCluster
 metadata:
   name: "${tc_name_1}"
 spec:
-  version: v6.1.0
+  version: v6.5.0
   timezone: UTC
   pvReclaimPolicy: Delete
   enableDynamicConfiguration: true
@@ -106,7 +106,7 @@ kind: TidbCluster
 metadata:
   name: "${tc_name_2}"
 spec:
-  version: v6.1.0
+  version: v6.5.0
   timezone: UTC
   pvReclaimPolicy: Delete
   enableDynamicConfiguration: true
@@ -383,7 +383,7 @@ kind: TidbCluster
 metadata:
   name: "${tc_name_1}"
 spec:
-  version: v6.1.0
+  version: v6.5.0
   timezone: UTC
   tlsCluster:
     enabled: true
@@ -441,7 +441,7 @@ kind: TidbCluster
 metadata:
   name: "${tc_name_2}"
 spec:
-  version: v6.1.0
+  version: v6.5.0
   timezone: UTC
   tlsCluster:
     enabled: true
diff --git a/en/deploy-tidb-dm.md b/en/deploy-tidb-dm.md
index a9e8498896..94f01af549 100644
--- a/en/deploy-tidb-dm.md
+++ b/en/deploy-tidb-dm.md
@@ -29,9 +29,9 @@ Usually, components in a cluster are in the same version. It is recommended to c
 The formats of the related parameters are as follows:
-- `spec.version`: the format is `imageTag`, such as `v6.1.0`.
+- `spec.version`: the format is `imageTag`, such as `v6.5.0`.
 - `spec..baseImage`: the format is `imageName`, such as `pingcap/dm`.
-- `spec..version`: the format is `imageTag`, such as `v6.1.0`.
+- `spec..version`: the format is `imageTag`, such as `v6.5.0`.
 TiDB Operator only supports deploying DM 2.0 and later versions.
@@ -50,7 +50,7 @@ metadata:
   name: ${dm_cluster_name}
   namespace: ${namespace}
 spec:
-  version: v6.1.0
+  version: v6.5.0
   configUpdateStrategy: RollingUpdate
   pvReclaimPolicy: Retain
   discovery: {}
@@ -141,10 +141,10 @@ kubectl apply -f ${dm_cluster_name}.yaml -n ${namespace}
 If the server does not have an external network, you need to download the Docker image used by the DM cluster and upload the image to the server, and then execute `docker load` to install the Docker image on the server:
-1. Deploy a DM cluster requires the following Docker image (assuming the version of the DM cluster is v6.1.0):
+1. Deploy a DM cluster requires the following Docker image (assuming the version of the DM cluster is v6.5.0):
 ```shell
- pingcap/dm:v6.1.0
+ pingcap/dm:v6.5.0
 ```
 2. To download the image, execute the following command:
@@ -152,8 +152,8 @@ If the server does not have an external network, you need to download the Docker
 {{< copyable "shell-regular" >}}
 ```shell
- docker pull pingcap/dm:v6.1.0
- docker save -o dm-v6.1.0.tar pingcap/dm:v6.1.0
+ docker pull pingcap/dm:v6.5.0
+ docker save -o dm-v6.5.0.tar pingcap/dm:v6.5.0
 ```
 3. Upload the Docker image to the server, and execute `docker load` to install the image on the server:
@@ -161,7 +161,7 @@ If the server does not have an external network, you need to download the Docker
 {{< copyable "shell-regular" >}}
 ```shell
- docker load -i dm-v6.1.0.tar
+ docker load -i dm-v6.5.0.tar
 ```
 After deploying the DM cluster, execute the following command to view the Pod status:
diff --git a/en/deploy-tidb-monitor-across-multiple-kubernetes.md b/en/deploy-tidb-monitor-across-multiple-kubernetes.md
index babcb4cc92..1b4dda45fc 100644
--- a/en/deploy-tidb-monitor-across-multiple-kubernetes.md
+++ b/en/deploy-tidb-monitor-across-multiple-kubernetes.md
@@ -302,7 +302,7 @@ After collecting data using Prometheus, you can visualize multi-cluster monitori
 ```shell
 # set tidb version here
-version=v6.1.0
+version=v6.5.0
 docker run --rm -i -v ${PWD}/dashboards:/dashboards/ pingcap/tidb-monitor-initializer:${version} && \
 cd dashboards
 ```
diff --git a/en/enable-tls-between-components.md b/en/enable-tls-between-components.md
index bc0e869530..de65d190e0 100644
--- a/en/enable-tls-between-components.md
+++ b/en/enable-tls-between-components.md
@@ -1336,7 +1336,7 @@ In this step, you need to perform the following operations:
 spec:
   tlsCluster:
     enabled: true
-  version: v6.1.0
+  version: v6.5.0
   timezone: UTC
   pvReclaimPolicy: Retain
   pd:
@@ -1395,7 +1395,7 @@ In this step, you need to perform the following operations:
     version: 7.5.11
   initializer:
     baseImage: pingcap/tidb-monitor-initializer
-    version: v6.1.0
+    version: v6.5.0
   reloader:
     baseImage: pingcap/tidb-monitor-reloader
     version: v1.0.1
diff --git a/en/enable-tls-for-dm.md b/en/enable-tls-for-dm.md
index accf639a39..659d9042da 100644
--- a/en/enable-tls-for-dm.md
+++ b/en/enable-tls-for-dm.md
@@ -518,7 +518,7 @@ metadata:
 spec:
   tlsCluster:
     enabled: true
-  version: v6.1.0
+  version: v6.5.0
   pvReclaimPolicy: Retain
   discovery: {}
   master:
@@ -588,7 +588,7 @@ metadata:
   name: ${cluster_name}
   namespace: ${namespace}
 spec:
-  version: v6.1.0
+  version: v6.5.0
   pvReclaimPolicy: Retain
   discovery: {}
   tlsClientSecretNames:
diff --git a/en/enable-tls-for-mysql-client.md b/en/enable-tls-for-mysql-client.md
index 0751d04d0e..cedc77a123 100644
--- a/en/enable-tls-for-mysql-client.md
+++ b/en/enable-tls-for-mysql-client.md
@@ -553,7 +553,7 @@ In this step, you create a TiDB cluster and perform the following operations:
   name: ${cluster_name}
   namespace: ${namespace}
 spec:
-  version: v6.1.0
+  version: v6.5.0
   timezone: UTC
   pvReclaimPolicy: Retain
   pd:
diff --git a/en/get-started.md b/en/get-started.md
index dec3da9f83..18d2984b09 100644
--- a/en/get-started.md
+++ b/en/get-started.md
@@ -495,10 +495,10 @@ APPROXIMATE_KEYS: 0
 ```sql
 mysql> select tidb_version()\G
 *************************** 1. row ***************************
- tidb_version(): Release Version: v6.1.0
+ tidb_version(): Release Version: v6.5.0
 Edition: Community
 Git Commit Hash: 4a1b2e9fe5b5afb1068c56de47adb07098d768d6
- Git Branch: heads/refs/tags/v6.1.0
+ Git Branch: heads/refs/tags/v6.5.0
 UTC Build Time: 2021-11-24 13:32:39
 GoVersion: go1.16.4
 Race Enabled: false
@@ -694,7 +694,7 @@ Note that `nightly` is not a fixed version. Running the command above at a diffe
 ```
 *************************** 1. row ***************************
-tidb_version(): Release Version: v6.1.0-alpha-445-g778e188fa
+tidb_version(): Release Version: v6.5.0-alpha-445-g778e188fa
 Edition: Community
 Git Commit Hash: 778e188fa7af4f48497ff9e05ca6681bf9a5fa16
 Git Branch: master
diff --git a/en/monitor-a-tidb-cluster.md b/en/monitor-a-tidb-cluster.md
index 88eaf5f4d2..e27a14e2b5 100644
--- a/en/monitor-a-tidb-cluster.md
+++ b/en/monitor-a-tidb-cluster.md
@@ -50,7 +50,7 @@ spec:
     type: NodePort
   initializer:
     baseImage: pingcap/tidb-monitor-initializer
-    version: v6.1.0
+    version: v6.5.0
   reloader:
     baseImage: pingcap/tidb-monitor-reloader
     version: v1.0.1
@@ -172,7 +172,7 @@ spec:
     type: NodePort
   initializer:
     baseImage: pingcap/tidb-monitor-initializer
-    version: v6.1.0
+    version: v6.5.0
   reloader:
     baseImage: pingcap/tidb-monitor-reloader
     version: v1.0.1
@@ -231,7 +231,7 @@ spec:
     foo: "bar"
   initializer:
     baseImage: pingcap/tidb-monitor-initializer
-    version: v6.1.0
+    version: v6.5.0
   reloader:
     baseImage: pingcap/tidb-monitor-reloader
     version: v1.0.1
@@ -273,7 +273,7 @@ spec:
     type: ClusterIP
   initializer:
     baseImage: pingcap/tidb-monitor-initializer
-    version: v6.1.0
+    version: v6.5.0
   reloader:
     baseImage: pingcap/tidb-monitor-reloader
     version: v1.0.1
@@ -356,7 +356,7 @@ spec:
     type: NodePort
   initializer:
     baseImage: pingcap/tidb-monitor-initializer
-    version: v6.1.0
+    version: v6.5.0
   reloader:
     baseImage: pingcap/tidb-monitor-reloader
     version: v1.0.1
diff --git a/en/pd-recover.md b/en/pd-recover.md
index f5c1afeb62..94e8d36de6 100644
--- a/en/pd-recover.md
+++ b/en/pd-recover.md
@@ -17,7 +17,7 @@ PD Recover is a disaster recovery tool of [PD](https://docs.pingcap.com/tidb/sta
 wget https://download.pingcap.org/tidb-${version}-linux-amd64.tar.gz
 ```
- In the command above, `${version}` is the version of the TiDB cluster, such as `v6.1.0`.
+ In the command above, `${version}` is the version of the TiDB cluster, such as `v6.5.0`.
 2. Unpack the TiDB package for installation:
diff --git a/en/restart-a-tidb-cluster.md b/en/restart-a-tidb-cluster.md
index 85125c553b..dbee86efbd 100644
--- a/en/restart-a-tidb-cluster.md
+++ b/en/restart-a-tidb-cluster.md
@@ -31,7 +31,7 @@ kind: TidbCluster
 metadata:
   name: basic
 spec:
-  version: v6.1.0
+  version: v6.5.0
   timezone: UTC
   pvReclaimPolicy: Delete
   pd:
diff --git a/en/upgrade-a-tidb-cluster.md b/en/upgrade-a-tidb-cluster.md
index 0931bf0fd4..c86f953d86 100644
--- a/en/upgrade-a-tidb-cluster.md
+++ b/en/upgrade-a-tidb-cluster.md
@@ -40,7 +40,7 @@ During the rolling update, TiDB Operator automatically completes Leader transfer
 The `version` field has following formats:
-    - `spec.version`: the format is `imageTag`, such as `v6.1.0`
+    - `spec.version`: the format is `imageTag`, such as `v6.5.0`
     - `spec..version`: the format is `imageTag`, such as `v3.1.0`
 2. Check the upgrade progress:
diff --git a/zh/access-dashboard.md b/zh/access-dashboard.md
index a6c9b647b3..e2124015ad 100644
--- a/zh/access-dashboard.md
+++ b/zh/access-dashboard.md
@@ -234,7 +234,7 @@ spec:
   ngMonitoring:
     requests:
       storage: 10Gi
-    version: v6.1.0
+    version: v6.5.0
     # storageClassName: default
     baseImage: pingcap/ng-monitoring
 ```
diff --git a/zh/advanced-statefulset.md b/zh/advanced-statefulset.md
index 1fb5bfb6b0..7dc16e0fe7 100644
--- a/zh/advanced-statefulset.md
+++ b/zh/advanced-statefulset.md
@@ -92,7 +92,7 @@ kind: TidbCluster
 metadata:
   name: asts
 spec:
-  version: v6.1.0
+  version: v6.5.0
   timezone: UTC
   pvReclaimPolicy: Delete
   pd:
@@ -144,7 +144,7 @@ metadata:
     tikv.tidb.pingcap.com/delete-slots: '[1]'
   name: asts
 spec:
-  version: v6.1.0
+  version: v6.5.0
   timezone: UTC
   pvReclaimPolicy: Delete
   pd:
@@ -198,7 +198,7 @@ metadata:
     tikv.tidb.pingcap.com/delete-slots: '[]'
   name: asts
 spec:
-  version: v6.1.0
+  version: v6.5.0
   timezone: UTC
   pvReclaimPolicy: Delete
   pd:
diff --git a/zh/aggregate-multiple-cluster-monitor-data.md b/zh/aggregate-multiple-cluster-monitor-data.md
index 0b8226fa58..e61462d9e6 100644
--- a/zh/aggregate-multiple-cluster-monitor-data.md
+++ b/zh/aggregate-multiple-cluster-monitor-data.md
@@ -170,7 +170,7 @@ spec:
     version: 7.5.11
   initializer:
     baseImage: registry.cn-beijing.aliyuncs.com/tidb/tidb-monitor-initializer
-    version: v6.1.0
+    version: v6.5.0
   reloader:
     baseImage: registry.cn-beijing.aliyuncs.com/tidb/tidb-monitor-reloader
     version: v1.0.1
diff --git a/zh/configure-a-tidb-cluster.md b/zh/configure-a-tidb-cluster.md
index ade19416e6..40f5690a7c 100644
--- a/zh/configure-a-tidb-cluster.md
+++ b/zh/configure-a-tidb-cluster.md
@@ -40,9 +40,9 @@ category: how-to
 相关参数的格式如下:
-- `spec.version`,格式为 `imageTag`,例如 `v6.1.0`
+- `spec.version`,格式为 `imageTag`,例如 `v6.5.0`
 - `spec..baseImage`,格式为 `imageName`,例如 `pingcap/tidb`
-- `spec..version`,格式为 `imageTag`,例如 `v6.1.0`
+- `spec..version`,格式为 `imageTag`,例如 `v6.5.0`
 ### 推荐配置
diff --git a/zh/deploy-heterogeneous-tidb-cluster.md b/zh/deploy-heterogeneous-tidb-cluster.md
index 08cbff712c..7b0b6f124f 100644
--- a/zh/deploy-heterogeneous-tidb-cluster.md
+++ b/zh/deploy-heterogeneous-tidb-cluster.md
@@ -50,7 +50,7 @@ summary: 本文档介绍如何为已有的 TiDB 集群部署一个异构集群
   name: ${heterogeneous_cluster_name}
 spec:
   configUpdateStrategy: RollingUpdate
-  version: v6.1.0
+  version: v6.5.0
   timezone: UTC
   pvReclaimPolicy: Delete
   discovery: {}
@@ -129,7 +129,7 @@ summary: 本文档介绍如何为已有的 TiDB 集群部署一个异构集群
   tlsCluster:
     enabled: true
   configUpdateStrategy: RollingUpdate
-  version: v6.1.0
+  version: v6.5.0
   timezone: UTC
   pvReclaimPolicy: Delete
   discovery: {}
@@ -219,7 +219,7 @@ summary: 本文档介绍如何为已有的 TiDB 集群部署一个异构集群
     version: 7.5.11
   initializer:
     baseImage: pingcap/tidb-monitor-initializer
-    version: v6.1.0
+    version: v6.5.0
   reloader:
     baseImage: pingcap/tidb-monitor-reloader
     version: v1.0.1
diff --git a/zh/deploy-on-gcp-gke.md b/zh/deploy-on-gcp-gke.md
index 835472e97f..1794aa2034 100644
--- a/zh/deploy-on-gcp-gke.md
+++ b/zh/deploy-on-gcp-gke.md
@@ -269,7 +269,7 @@ gcloud compute instances create bastion \
 $ mysql --comments -h 10.128.15.243 -P 4000 -u root
 Welcome to the MariaDB monitor. Commands end with ; or \g.
 Your MySQL connection id is 7823
-Server version: 5.7.25-TiDB-v6.1.0 TiDB Server (Apache License 2.0) Community Edition, MySQL 5.7 compatible
+Server version: 5.7.25-TiDB-v6.5.0 TiDB Server (Apache License 2.0) Community Edition, MySQL 5.7 compatible
 Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.
diff --git a/zh/deploy-on-general-kubernetes.md b/zh/deploy-on-general-kubernetes.md
index c05dafa32c..b6635f23b1 100644
--- a/zh/deploy-on-general-kubernetes.md
+++ b/zh/deploy-on-general-kubernetes.md
@@ -43,17 +43,17 @@ summary: 介绍如何在标准 Kubernetes 集群上通过 TiDB Operator 部署 T
 如果服务器没有外网,需要在有外网的机器上将 TiDB 集群用到的 Docker 镜像下载下来并上传到服务器上,然后使用 `docker load` 将 Docker 镜像安装到服务器上。
- 部署一套 TiDB 集群会用到下面这些 Docker 镜像(假设 TiDB 集群的版本是 v6.1.0):
+ 部署一套 TiDB 集群会用到下面这些 Docker 镜像(假设 TiDB 集群的版本是 v6.5.0):
 ```shell
- pingcap/pd:v6.1.0
- pingcap/tikv:v6.1.0
- pingcap/tidb:v6.1.0
- pingcap/tidb-binlog:v6.1.0
- pingcap/ticdc:v6.1.0
- pingcap/tiflash:v6.1.0
+ pingcap/pd:v6.5.0
+ pingcap/tikv:v6.5.0
+ pingcap/tidb:v6.5.0
+ pingcap/tidb-binlog:v6.5.0
+ pingcap/ticdc:v6.5.0
+ pingcap/tiflash:v6.5.0
 pingcap/tidb-monitor-reloader:v1.0.1
- pingcap/tidb-monitor-initializer:v6.1.0
+ pingcap/tidb-monitor-initializer:v6.5.0
 grafana/grafana:6.0.1
 prom/prometheus:v2.18.1
 busybox:1.26.2
@@ -64,26 +64,26 @@ summary: 介绍如何在标准 Kubernetes 集群上通过 TiDB Operator 部署 T
 {{< copyable "shell-regular" >}}
 ```shell
- docker pull pingcap/pd:v6.1.0
- docker pull pingcap/tikv:v6.1.0
- docker pull pingcap/tidb:v6.1.0
- docker pull pingcap/tidb-binlog:v6.1.0
- docker pull pingcap/ticdc:v6.1.0
- docker pull pingcap/tiflash:v6.1.0
+ docker pull pingcap/pd:v6.5.0
+ docker pull pingcap/tikv:v6.5.0
+ docker pull pingcap/tidb:v6.5.0
+ docker pull pingcap/tidb-binlog:v6.5.0
+ docker pull pingcap/ticdc:v6.5.0
+ docker pull pingcap/tiflash:v6.5.0
 docker pull pingcap/tidb-monitor-reloader:v1.0.1
- docker pull pingcap/tidb-monitor-initializer:v6.1.0
+ docker pull pingcap/tidb-monitor-initializer:v6.5.0
 docker pull grafana/grafana:6.0.1
 docker pull prom/prometheus:v2.18.1
 docker pull busybox:1.26.2
- docker save -o pd-v6.1.0.tar pingcap/pd:v6.1.0
- docker save -o tikv-v6.1.0.tar pingcap/tikv:v6.1.0
- docker save -o tidb-v6.1.0.tar pingcap/tidb:v6.1.0
- docker save -o tidb-binlog-v6.1.0.tar pingcap/tidb-binlog:v6.1.0
- docker save -o ticdc-v6.1.0.tar pingcap/ticdc:v6.1.0
- docker save -o tiflash-v6.1.0.tar pingcap/tiflash:v6.1.0
+ docker save -o pd-v6.5.0.tar pingcap/pd:v6.5.0
+ docker save -o tikv-v6.5.0.tar pingcap/tikv:v6.5.0
+ docker save -o tidb-v6.5.0.tar pingcap/tidb:v6.5.0
+ docker save -o tidb-binlog-v6.5.0.tar pingcap/tidb-binlog:v6.5.0
+ docker save -o ticdc-v6.5.0.tar pingcap/ticdc:v6.5.0
+ docker save -o tiflash-v6.5.0.tar pingcap/tiflash:v6.5.0
 docker save -o tidb-monitor-reloader-v1.0.1.tar pingcap/tidb-monitor-reloader:v1.0.1
- docker save -o tidb-monitor-initializer-v6.1.0.tar pingcap/tidb-monitor-initializer:v6.1.0
+ docker save -o tidb-monitor-initializer-v6.5.0.tar pingcap/tidb-monitor-initializer:v6.5.0
 docker save -o grafana-6.0.1.tar grafana/grafana:6.0.1
 docker save -o prometheus-v2.18.1.tar prom/prometheus:v2.18.1
 docker save -o busybox-1.26.2.tar busybox:1.26.2
@@ -94,14 +94,14 @@ summary: 介绍如何在标准 Kubernetes 集群上通过 TiDB Operator 部署 T
 {{< copyable "shell-regular" >}}
 ```shell
- docker load -i pd-v6.1.0.tar
- docker load -i tikv-v6.1.0.tar
- docker load -i tidb-v6.1.0.tar
- docker load -i tidb-binlog-v6.1.0.tar
- docker load -i ticdc-v6.1.0.tar
- docker load -i tiflash-v6.1.0.tar
+ docker load -i pd-v6.5.0.tar
+ docker load -i tikv-v6.5.0.tar
+ docker load -i tidb-v6.5.0.tar
+ docker load -i tidb-binlog-v6.5.0.tar
+ docker load -i ticdc-v6.5.0.tar
+ docker load -i tiflash-v6.5.0.tar
 docker load -i tidb-monitor-reloader-v1.0.1.tar
- docker load -i tidb-monitor-initializer-v6.1.0.tar
+ docker load -i tidb-monitor-initializer-v6.5.0.tar
 docker load -i grafana-6.0.1.tar
 docker load -i prometheus-v2.18.1.tar
 docker load -i busybox-1.26.2.tar
diff --git a/zh/deploy-tidb-binlog.md b/zh/deploy-tidb-binlog.md
index 3ab7c64305..c2dfff9923 100644
--- a/zh/deploy-tidb-binlog.md
+++ b/zh/deploy-tidb-binlog.md
@@ -25,7 +25,7 @@ spec
   ...
   pump:
     baseImage: pingcap/tidb-binlog
-    version: v6.1.0
+    version: v6.5.0
     replicas: 1
     storageClassName: local-storage
     requests:
@@ -44,7 +44,7 @@ spec
   ...
   pump:
     baseImage: pingcap/tidb-binlog
-    version: v6.1.0
+    version: v6.5.0
     replicas: 1
     storageClassName: local-storage
     requests:
@@ -181,7 +181,7 @@ spec
 ```yaml
 clusterName: example-tidb
-clusterVersion: v6.1.0
+clusterVersion: v6.5.0
 baseImage: pingcap/tidb-binlog
 storageClassName: local-storage
 storage: 10Gi
diff --git a/zh/deploy-tidb-cluster-across-multiple-kubernetes.md b/zh/deploy-tidb-cluster-across-multiple-kubernetes.md
index adc49a9c46..4623d141dc 100644
--- a/zh/deploy-tidb-cluster-across-multiple-kubernetes.md
+++ b/zh/deploy-tidb-cluster-across-multiple-kubernetes.md
@@ -52,7 +52,7 @@ kind: TidbCluster
 metadata:
   name: "${tc_name_1}"
 spec:
-  version: v6.1.0
+  version: v6.5.0
   timezone: UTC
   pvReclaimPolicy: Delete
   enableDynamicConfiguration: true
@@ -106,7 +106,7 @@ kind: TidbCluster
 metadata:
   name: "${tc_name_2}"
 spec:
-  version: v6.1.0
+  version: v6.5.0
   timezone: UTC
   pvReclaimPolicy: Delete
   enableDynamicConfiguration: true
@@ -379,7 +379,7 @@ kind: TidbCluster
 metadata:
   name: "${tc_name_1}"
 spec:
-  version: v6.1.0
+  version: v6.5.0
   timezone: UTC
   tlsCluster:
     enabled: true
@@ -437,7 +437,7 @@ kind: TidbCluster
 metadata:
   name: "${tc_name_2}"
 spec:
-  version: v6.1.0
+  version: v6.5.0
   timezone: UTC
   tlsCluster:
     enabled: true
diff --git a/zh/deploy-tidb-dm.md b/zh/deploy-tidb-dm.md
index 34361f89b2..920a3b7fc4 100644
--- a/zh/deploy-tidb-dm.md
+++ b/zh/deploy-tidb-dm.md
@@ -29,9 +29,9 @@ summary: 了解如何在 Kubernetes 上部署 TiDB DM 集群。
 相关参数的格式如下:
-- `spec.version`,格式为 `imageTag`,例如 `v6.1.0`
+- `spec.version`,格式为 `imageTag`,例如 `v6.5.0`
 - `spec..baseImage`,格式为 `imageName`,例如 `pingcap/dm`
-- `spec..version`,格式为 `imageTag`,例如 `v6.1.0`
+- `spec..version`,格式为 `imageTag`,例如 `v6.5.0`
 TiDB Operator 仅支持部署 DM 2.0 及更新版本。
@@ -50,7 +50,7 @@ metadata:
   name: ${dm_cluster_name}
   namespace: ${namespace}
 spec:
-  version: v6.1.0
+  version: v6.5.0
   configUpdateStrategy: RollingUpdate
   pvReclaimPolicy: Retain
   discovery: {}
@@ -140,10 +140,10 @@ kubectl apply -f ${dm_cluster_name}.yaml -n ${namespace}
 如果服务器没有外网,需要按下述步骤在有外网的机器上将 DM 集群用到的 Docker 镜像下载下来并上传到服务器上,然后使用 `docker load` 将 Docker 镜像安装到服务器上:
-1. 部署一套 DM 集群会用到下面这些 Docker 镜像(假设 DM 集群的版本是 v6.1.0):
+1. 部署一套 DM 集群会用到下面这些 Docker 镜像(假设 DM 集群的版本是 v6.5.0):
 ```shell
- pingcap/dm:v6.1.0
+ pingcap/dm:v6.5.0
 ```
 2. 通过下面的命令将所有这些镜像下载下来:
@@ -151,9 +151,9 @@ kubectl apply -f ${dm_cluster_name}.yaml -n ${namespace}
 {{< copyable "shell-regular" >}}
 ```shell
- docker pull pingcap/dm:v6.1.0
+ docker pull pingcap/dm:v6.5.0
- docker save -o dm-v6.1.0.tar pingcap/dm:v6.1.0
+ docker save -o dm-v6.5.0.tar pingcap/dm:v6.5.0
 ```
 3. 将这些 Docker 镜像上传到服务器上,并执行 `docker load` 将这些 Docker 镜像安装到服务器上:
@@ -161,7 +161,7 @@ kubectl apply -f ${dm_cluster_name}.yaml -n ${namespace}
 {{< copyable "shell-regular" >}}
 ```shell
- docker load -i dm-v6.1.0.tar
+ docker load -i dm-v6.5.0.tar
 ```
 部署 DM 集群完成后,通过下面命令查看 Pod 状态:
diff --git a/zh/deploy-tidb-monitor-across-multiple-kubernetes.md b/zh/deploy-tidb-monitor-across-multiple-kubernetes.md
index b5ad634a96..7ddfd1705e 100644
--- a/zh/deploy-tidb-monitor-across-multiple-kubernetes.md
+++ b/zh/deploy-tidb-monitor-across-multiple-kubernetes.md
@@ -75,7 +75,7 @@ Push 方式指利用 Prometheus remote-write 的特性,使位于不同 Kuberne
     #region: us-east-1
   initializer:
     baseImage: pingcap/tidb-monitor-initializer
-    version: v6.1.0
+    version: v6.5.0
   persistent: true
   storage: 100Gi
   storageClassName: ${storageclass_name}
@@ -159,7 +159,7 @@ Pull 方式是指从不同 Kubernetes 集群的 Prometheus 实例中拉取监控
     #region: us-east-1
   initializer:
     baseImage: pingcap/tidb-monitor-initializer
-    version: v6.1.0
+    version: v6.5.0
   persistent: true
   storage: 20Gi
   storageClassName: ${storageclass_name}
@@ -245,7 +245,7 @@ Pull 方式是指从不同 Kubernetes 集群的 Prometheus 实例中拉取监控
     #region: us-east-1
   initializer:
     baseImage: pingcap/tidb-monitor-initializer
-    version: v6.1.0
+    version: v6.5.0
   persistent: true
   storage: 20Gi
   storageClassName: ${storageclass_name}
@@ -293,7 +293,7 @@ scrape_configs:
 ```shell
 # set tidb version here
-version=v6.1.0
+version=v6.5.0
 docker run --rm -i -v ${PWD}/dashboards:/dashboards/ pingcap/tidb-monitor-initializer:${version} && \
 cd dashboards
 ```
diff --git a/zh/enable-monitor-shards.md b/zh/enable-monitor-shards.md
index 458aa70cf9..846c1d2ad4 100644
--- a/zh/enable-monitor-shards.md
+++ b/zh/enable-monitor-shards.md
@@ -34,7 +34,7 @@ spec:
     version: v2.27.1
   initializer:
     baseImage: pingcap/tidb-monitor-initializer
-    version: v6.1.0
+    version: v6.5.0
   reloader:
     baseImage: pingcap/tidb-monitor-reloader
     version: v1.0.1
diff --git a/zh/enable-tls-between-components.md b/zh/enable-tls-between-components.md
index e615949724..f5a3196964 100644
--- a/zh/enable-tls-between-components.md
+++ b/zh/enable-tls-between-components.md
@@ -1313,7 +1313,7 @@ summary: 在 Kubernetes 上如何为 TiDB 集群组件间开启 TLS。
 spec:
   tlsCluster:
     enabled: true
-  version: v6.1.0
+  version: v6.5.0
   timezone: UTC
   pvReclaimPolicy: Retain
   pd:
@@ -1372,7 +1372,7 @@ summary: 在 Kubernetes 上如何为 TiDB 集群组件间开启 TLS。
     version: 7.5.11
   initializer:
     baseImage: pingcap/tidb-monitor-initializer
-    version: v6.1.0
+    version: v6.5.0
   reloader:
     baseImage: pingcap/tidb-monitor-reloader
     version: v1.0.1
diff --git a/zh/enable-tls-for-dm.md b/zh/enable-tls-for-dm.md
index 7a41a08e48..23a34aab58 100644
--- a/zh/enable-tls-for-dm.md
+++ b/zh/enable-tls-for-dm.md
@@ -491,7 +491,7 @@ metadata:
 spec:
   tlsCluster:
     enabled: true
-  version: v6.1.0
+  version: v6.5.0
   pvReclaimPolicy: Retain
   discovery: {}
   master:
@@ -559,7 +559,7 @@ metadata:
   name: ${cluster_name}
   namespace: ${namespace}
 spec:
-  version: v6.1.0
+  version: v6.5.0
   pvReclaimPolicy: Retain
   discovery: {}
   tlsClientSecretNames:
diff --git a/zh/enable-tls-for-mysql-client.md b/zh/enable-tls-for-mysql-client.md
index 008dbb80a0..a9855c666d 100644
--- a/zh/enable-tls-for-mysql-client.md
+++ b/zh/enable-tls-for-mysql-client.md
@@ -549,7 +549,7 @@ summary: 在 Kubernetes 上如何为 TiDB 集群的 MySQL 客户端开启 TLS。
   name: ${cluster_name}
   namespace: ${namespace}
 spec:
-  version: v6.1.0
+  version: v6.5.0
   timezone: UTC
   pvReclaimPolicy: Retain
   pd:
diff --git a/zh/get-started.md b/zh/get-started.md
index b84c46907b..95a7fccba2 100644
--- a/zh/get-started.md
+++ b/zh/get-started.md
@@ -489,7 +489,7 @@ mysql --comments -h 127.0.0.1 -P 14000 -u root
 ```
 Welcome to the MariaDB monitor. Commands end with ; or \g.
 Your MySQL connection id is 178505
-Server version: 5.7.25-TiDB-v6.1.0 TiDB Server (Apache License 2.0) Community Edition, MySQL 5.7 compatible
+Server version: 5.7.25-TiDB-v6.5.0 TiDB Server (Apache License 2.0) Community Edition, MySQL 5.7 compatible
 Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.
@@ -538,10 +538,10 @@ mysql> select * from information_schema.tikv_region_status where db_name=databas
 ```sql
 mysql> select tidb_version()\G
 *************************** 1. row ***************************
- tidb_version(): Release Version: v6.1.0
+ tidb_version(): Release Version: v6.5.0
 Edition: Community
 Git Commit Hash: 4a1b2e9fe5b5afb1068c56de47adb07098d768d6
- Git Branch: heads/refs/tags/v6.1.0
+ Git Branch: heads/refs/tags/v6.5.0
 UTC Build Time: 2021-11-24 13:32:39
 GoVersion: go1.16.4
 Race Enabled: false
@@ -733,7 +733,7 @@ mysql --comments -h 127.0.0.1 -P 24000 -u root -e 'select tidb_version()\G'
 ```
 *************************** 1. row ***************************
-tidb_version(): Release Version: v6.1.0-alpha-445-g778e188fa
+tidb_version(): Release Version: v6.5.0-alpha-445-g778e188fa
 Edition: Community
 Git Commit Hash: 778e188fa7af4f48497ff9e05ca6681bf9a5fa16
 Git Branch: master
diff --git a/zh/monitor-a-tidb-cluster.md b/zh/monitor-a-tidb-cluster.md
index b78cfa5481..adb6402689 100644
--- a/zh/monitor-a-tidb-cluster.md
+++ b/zh/monitor-a-tidb-cluster.md
@@ -48,7 +48,7 @@ spec:
     type: NodePort
   initializer:
     baseImage: pingcap/tidb-monitor-initializer
-    version: v6.1.0
+    version: v6.5.0
   reloader:
     baseImage: pingcap/tidb-monitor-reloader
     version: v1.0.1
@@ -170,7 +170,7 @@ spec:
     type: NodePort
   initializer:
     baseImage: pingcap/tidb-monitor-initializer
-    version: v6.1.0
+    version: v6.5.0
   reloader:
     baseImage: pingcap/tidb-monitor-reloader
     version: v1.0.1
@@ -227,7 +227,7 @@ spec:
     foo: "bar"
   initializer:
     baseImage: pingcap/tidb-monitor-initializer
-    version: v6.1.0
+    version: v6.5.0
   reloader:
     baseImage: pingcap/tidb-monitor-reloader
     version: v1.0.1
@@ -269,7 +269,7 @@ spec:
     type: ClusterIP
   initializer:
     baseImage: pingcap/tidb-monitor-initializer
-    version: v6.1.0
+    version: v6.5.0
   reloader:
     baseImage: pingcap/tidb-monitor-reloader
     version: v1.0.1
@@ -350,7 +350,7 @@ spec:
     type: NodePort
   initializer:
     baseImage: pingcap/tidb-monitor-initializer
-    version: v6.1.0
+    version: v6.5.0
   reloader:
     baseImage: pingcap/tidb-monitor-reloader
     version: v1.0.1
diff --git a/zh/pd-recover.md b/zh/pd-recover.md
index 2da8e9cf78..3cbac57102 100644
--- a/zh/pd-recover.md
+++ b/zh/pd-recover.md
@@ -17,7 +17,7 @@ PD Recover 是对 PD 进行灾难性恢复的工具,用于恢复无法正常
 wget https://download.pingcap.org/tidb-${version}-linux-amd64.tar.gz
 ```
- `${version}` 是 TiDB 集群版本,例如,`v6.1.0`。
+ `${version}` 是 TiDB 集群版本,例如,`v6.5.0`。
 2. 解压安装包:
diff --git a/zh/restart-a-tidb-cluster.md b/zh/restart-a-tidb-cluster.md
index 2dfdfbea33..7ed69d2934 100644
--- a/zh/restart-a-tidb-cluster.md
+++ b/zh/restart-a-tidb-cluster.md
@@ -21,7 +21,7 @@ kind: TidbCluster
 metadata:
   name: basic
 spec:
-  version: v6.1.0
+  version: v6.5.0
   timezone: UTC
   pvReclaimPolicy: Delete
   pd: