diff --git a/en/advanced-statefulset.md b/en/advanced-statefulset.md index 0b4adbc7aa..d2a4a4cd8d 100644 --- a/en/advanced-statefulset.md +++ b/en/advanced-statefulset.md @@ -20,7 +20,7 @@ The [advanced StatefulSet controller](https://github.com/pingcap/advanced-statef {{< copyable "shell-regular" >}} ```shell - kubectl apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/master/manifests/advanced-statefulset-crd.v1beta1.yaml + kubectl apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.4.0/manifests/advanced-statefulset-crd.v1beta1.yaml ``` * For Kubernetes versions >= 1.16: @@ -28,7 +28,7 @@ The [advanced StatefulSet controller](https://github.com/pingcap/advanced-statef {{< copyable "shell-regular" >}} ``` - kubectl apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/master/manifests/advanced-statefulset-crd.v1.yaml + kubectl apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.4.0/manifests/advanced-statefulset-crd.v1.yaml ``` 2. Enable the `AdvancedStatefulSet` feature in `values.yaml` of the TiDB Operator chart: diff --git a/en/aggregate-multiple-cluster-monitor-data.md b/en/aggregate-multiple-cluster-monitor-data.md index 00d284cea4..cf6f137e7d 100644 --- a/en/aggregate-multiple-cluster-monitor-data.md +++ b/en/aggregate-multiple-cluster-monitor-data.md @@ -24,7 +24,7 @@ Thanos provides [Thanos Query](https://thanos.io/tip/components/query.md/) compo {{< copyable "shell-regular" >}} ```shell - kubectl -n ${namespace} apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/master/examples/monitor-with-thanos/tidb-monitor.yaml + kubectl -n ${namespace} apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.4.0/examples/monitor-with-thanos/tidb-monitor.yaml ``` 2. Deploy the Thanos Query component. @@ -34,7 +34,7 @@ Thanos provides [Thanos Query](https://thanos.io/tip/components/query.md/) compo {{< copyable "shell-regular" >}} ``` - curl -sl -O https://raw.githubusercontent.com/pingcap/tidb-operator/master/examples/monitor-with-thanos/thanos-query.yaml + curl -sl -O https://raw.githubusercontent.com/pingcap/tidb-operator/v1.4.0/examples/monitor-with-thanos/thanos-query.yaml ``` 2. Manually modify the `--store` parameter in the `thanos-query.yaml` file by updating `basic-prometheus:10901` to `basic-prometheus.${namespace}:10901`. diff --git a/en/backup-to-s3.md b/en/backup-to-s3.md index e2d9d1ae60..80bb48e476 100644 --- a/en/backup-to-s3.md +++ b/en/backup-to-s3.md @@ -48,12 +48,12 @@ GRANT ### Step 1: Prepare for ad-hoc full backup -1. Execute the following command to create the role-based access control (RBAC) resources in the `tidb-cluster` namespace based on [backup-rbac.yaml](https://raw.githubusercontent.com/pingcap/tidb-operator/master/manifests/backup/backup-rbac.yaml): +1. Execute the following command to create the role-based access control (RBAC) resources in the `tidb-cluster` namespace based on [backup-rbac.yaml](https://raw.githubusercontent.com/pingcap/tidb-operator/v1.4.0/manifests/backup/backup-rbac.yaml): {{< copyable "shell-regular" >}} ```shell - kubectl apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/master/manifests/backup/backup-rbac.yaml -n tidb-cluster + kubectl apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.4.0/manifests/backup/backup-rbac.yaml -n tidb-cluster ``` 2. Grant permissions to the remote storage. 
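For the permission step above, one common option is to store the S3 credentials in a Kubernetes Secret that the backup job can reference. A minimal sketch, assuming the Secret is named `s3-secret` and created in the same `tidb-cluster` namespace as the RBAC resources (replace the placeholder keys with your own values):

{{< copyable "shell-regular" >}}

```shell
# Create a Secret holding the access key and secret key used for the S3 backup
kubectl create secret generic s3-secret \
  --from-literal=access_key=<your-access-key> \
  --from-literal=secret_key=<your-secret-key> \
  --namespace=tidb-cluster
```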
diff --git a/en/cheat-sheet.md b/en/cheat-sheet.md index f79837dc5b..68e51ab36d 100644 --- a/en/cheat-sheet.md +++ b/en/cheat-sheet.md @@ -492,7 +492,7 @@ For example: {{< copyable "shell-regular" >}} ```shell -helm inspect values pingcap/tidb-operator --version=v1.4.0-beta.3 > values-tidb-operator.yaml +helm inspect values pingcap/tidb-operator --version=v1.4.0 > values-tidb-operator.yaml ``` ### Deploy using Helm chart @@ -508,7 +508,7 @@ For example: {{< copyable "shell-regular" >}} ```shell -helm install tidb-operator pingcap/tidb-operator --namespace=tidb-admin --version=v1.4.0-beta.3 -f values-tidb-operator.yaml +helm install tidb-operator pingcap/tidb-operator --namespace=tidb-admin --version=v1.4.0 -f values-tidb-operator.yaml ``` ### View the deployed Helm release @@ -532,7 +532,7 @@ For example: {{< copyable "shell-regular" >}} ```shell -helm upgrade tidb-operator pingcap/tidb-operator --version=v1.4.0-beta.3 -f values-tidb-operator.yaml +helm upgrade tidb-operator pingcap/tidb-operator --version=v1.4.0 -f values-tidb-operator.yaml ``` ### Delete Helm release diff --git a/en/configure-storage-class.md b/en/configure-storage-class.md index c4aa47888d..be8f561452 100644 --- a/en/configure-storage-class.md +++ b/en/configure-storage-class.md @@ -102,7 +102,7 @@ The `/mnt/ssd`, `/mnt/sharedssd`, `/mnt/monitoring`, and `/mnt/backup` directori {{< copyable "shell-regular" >}} ```shell - wget https://raw.githubusercontent.com/pingcap/tidb-operator/master/examples/local-pv/local-volume-provisioner.yaml + wget https://raw.githubusercontent.com/pingcap/tidb-operator/v1.4.0/examples/local-pv/local-volume-provisioner.yaml ``` 2. If you use the same discovery directory as described in [Step 1: Pre-allocate local storage](#step-1-pre-allocate-local-storage), you can skip this step. If you use a different path of discovery directory than in the previous step, you need to modify the ConfigMap and DaemonSet spec. @@ -172,7 +172,7 @@ The `/mnt/ssd`, `/mnt/sharedssd`, `/mnt/monitoring`, and `/mnt/backup` directori {{< copyable "shell-regular" >}} ```shell - kubectl apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/master/manifests/local-dind/local-volume-provisioner.yaml + kubectl apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.4.0/manifests/local-dind/local-volume-provisioner.yaml ``` 4. Check status of Pod and PV. diff --git a/en/deploy-on-alibaba-cloud.md b/en/deploy-on-alibaba-cloud.md index b8dbda2cda..f9793e516e 100644 --- a/en/deploy-on-alibaba-cloud.md +++ b/en/deploy-on-alibaba-cloud.md @@ -88,7 +88,7 @@ All the instances except ACK mandatory workers are deployed across availability tikv_count = 3 tidb_count = 2 pd_count = 3 - operator_version = "v1.4.0-beta.3" + operator_version = "v1.4.0" ``` * To deploy TiFlash in the cluster, set `create_tiflash_node_pool = true` in `terraform.tfvars`. You can also configure the node count and instance type of the TiFlash node pool by modifying `tiflash_count` and `tiflash_instance_type`. By default, the value of `tiflash_count` is `2`, and the value of `tiflash_instance_type` is `ecs.i2.2xlarge`. diff --git a/en/deploy-on-aws-eks.md b/en/deploy-on-aws-eks.md index 2663d7b5d6..8f05201f50 100644 --- a/en/deploy-on-aws-eks.md +++ b/en/deploy-on-aws-eks.md @@ -301,7 +301,7 @@ The following `c5d.4xlarge` example shows how to configure StorageClass for the 2. [Mount the local storage](https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner/blob/master/docs/operations.md#use-a-whole-disk-as-a-filesystem-pv) to the `/mnt/ssd` directory. - 3. 
According to the mounting configuration, modify the [local-volume-provisioner.yaml](https://raw.githubusercontent.com/pingcap/tidb-operator/master/manifests/eks/local-volume-provisioner.yaml) file. + 3. According to the mounting configuration, modify the [local-volume-provisioner.yaml](https://raw.githubusercontent.com/pingcap/tidb-operator/v1.4.0/manifests/eks/local-volume-provisioner.yaml) file. 4. Deploy and create a `local-storage` storage class using the modified `local-volume-provisioner.yaml` file. @@ -346,9 +346,9 @@ First, download the sample `TidbCluster` and `TidbMonitor` configuration files: {{< copyable "shell-regular" >}} ```shell -curl -O https://raw.githubusercontent.com/pingcap/tidb-operator/master/examples/aws/tidb-cluster.yaml && \ -curl -O https://raw.githubusercontent.com/pingcap/tidb-operator/master/examples/aws/tidb-monitor.yaml && \ -curl -O https://raw.githubusercontent.com/pingcap/tidb-operator/master/examples/aws/tidb-dashboard.yaml +curl -O https://raw.githubusercontent.com/pingcap/tidb-operator/v1.4.0/examples/aws/tidb-cluster.yaml && \ +curl -O https://raw.githubusercontent.com/pingcap/tidb-operator/v1.4.0/examples/aws/tidb-monitor.yaml && \ +curl -O https://raw.githubusercontent.com/pingcap/tidb-operator/v1.4.0/examples/aws/tidb-dashboard.yaml ``` Refer to [configure the TiDB cluster](configure-a-tidb-cluster.md) to further customize and configure the CR before applying. diff --git a/en/deploy-on-azure-aks.md b/en/deploy-on-azure-aks.md index 19088eca87..34ea630b7e 100644 --- a/en/deploy-on-azure-aks.md +++ b/en/deploy-on-azure-aks.md @@ -237,9 +237,9 @@ First, download the sample `TidbCluster` and `TidbMonitor` configuration files: {{< copyable "shell-regular" >}} ```shell -curl -O https://raw.githubusercontent.com/pingcap/tidb-operator/master/examples/aks/tidb-cluster.yaml && \ -curl -O https://raw.githubusercontent.com/pingcap/tidb-operator/master/examples/aks/tidb-monitor.yaml && \ -curl -O https://raw.githubusercontent.com/pingcap/tidb-operator/master/examples/aks/tidb-dashboard.yaml +curl -O https://raw.githubusercontent.com/pingcap/tidb-operator/v1.4.0/examples/aks/tidb-cluster.yaml && \ +curl -O https://raw.githubusercontent.com/pingcap/tidb-operator/v1.4.0/examples/aks/tidb-monitor.yaml && \ +curl -O https://raw.githubusercontent.com/pingcap/tidb-operator/v1.4.0/examples/aks/tidb-dashboard.yaml ``` Refer to [configure the TiDB cluster](configure-a-tidb-cluster.md) to further customize and configure the CR before applying. @@ -599,7 +599,7 @@ For instance types that provide local disks, refer to [Lsv2-series](https://docs {{< copyable "shell-regular" >}} ```shell - kubectl apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/master/manifests/eks/local-volume-provisioner.yaml + kubectl apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.4.0/manifests/eks/local-volume-provisioner.yaml ``` 3. Use local storage. diff --git a/en/deploy-on-gcp-gke.md b/en/deploy-on-gcp-gke.md index 394912fe8e..f7a32a49f4 100644 --- a/en/deploy-on-gcp-gke.md +++ b/en/deploy-on-gcp-gke.md @@ -134,7 +134,7 @@ If you need to simulate bare-metal performance, some GCP instance types provide {{< copyable "shell-regular" >}} ```shell - kubectl apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/master/manifests/gke/local-ssd-provision/local-ssd-provision.yaml + kubectl apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.4.0/manifests/gke/local-ssd-provision/local-ssd-provision.yaml ``` 3. Use the local storage. 
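Before pointing TiKV at the local storage, it can help to confirm that the provisioner is running and that local volumes have been discovered. A minimal check, assuming the default manifest deploys the provisioner DaemonSet into `kube-system` with the `app=local-volume-provisioner` label (adjust the namespace and label if you modified the manifest):

{{< copyable "shell-regular" >}}

```shell
# Check that the local-volume-provisioner Pods are running on the storage nodes
kubectl get pods -n kube-system -l app=local-volume-provisioner

# List the discovered PersistentVolumes and the storage classes they expose
kubectl get pv -o wide
kubectl get storageclass
```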
@@ -172,9 +172,9 @@ First, download the sample `TidbCluster` and `TidbMonitor` configuration files: {{< copyable "shell-regular" >}} ```shell -curl -O https://raw.githubusercontent.com/pingcap/tidb-operator/master/examples/gcp/tidb-cluster.yaml && \ -curl -O https://raw.githubusercontent.com/pingcap/tidb-operator/master/examples/gcp/tidb-monitor.yaml && \ -curl -O https://raw.githubusercontent.com/pingcap/tidb-operator/master/examples/gcp/tidb-dashboard.yaml +curl -O https://raw.githubusercontent.com/pingcap/tidb-operator/v1.4.0/examples/gcp/tidb-cluster.yaml && \ +curl -O https://raw.githubusercontent.com/pingcap/tidb-operator/v1.4.0/examples/gcp/tidb-monitor.yaml && \ +curl -O https://raw.githubusercontent.com/pingcap/tidb-operator/v1.4.0/examples/gcp/tidb-dashboard.yaml ``` Refer to [configure the TiDB cluster](configure-a-tidb-cluster.md) to further customize and configure the CR before applying. diff --git a/en/deploy-tidb-from-kubernetes-gke.md b/en/deploy-tidb-from-kubernetes-gke.md index 2208d0beba..16fa4a7234 100644 --- a/en/deploy-tidb-from-kubernetes-gke.md +++ b/en/deploy-tidb-from-kubernetes-gke.md @@ -96,7 +96,7 @@ If you see `Ready` for all nodes, congratulations. You've set up your first Kube TiDB Operator uses [Custom Resource Definition (CRD)](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/#customresourcedefinitions) to extend Kubernetes. Therefore, to use TiDB Operator, you must first create the `TidbCluster` CRD. ```shell -kubectl create -f https://raw.githubusercontent.com/pingcap/tidb-operator/master/manifests/crd.yaml && \ +kubectl create -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.4.0/manifests/crd.yaml && \ kubectl get crd tidbclusters.pingcap.com ``` @@ -108,7 +108,7 @@ After the `TidbCluster` CRD is created, install TiDB Operator in your Kubernetes ```shell kubectl create namespace tidb-admin -helm install --namespace tidb-admin tidb-operator pingcap/tidb-operator --version v1.4.0-beta.3 +helm install --namespace tidb-admin tidb-operator pingcap/tidb-operator --version v1.4.0 kubectl get po -n tidb-admin -l app.kubernetes.io/name=tidb-operator ``` @@ -125,13 +125,13 @@ To deploy the TiDB cluster, perform the following steps: 2. Deploy the TiDB cluster: ``` shell - kubectl apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/master/examples/basic/tidb-cluster.yaml -n demo + kubectl apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.4.0/examples/basic/tidb-cluster.yaml -n demo ``` 3. Deploy the TiDB cluster monitor: ``` shell - kubectl apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/master/examples/basic/tidb-monitor.yaml -n demo + kubectl apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.4.0/examples/basic/tidb-monitor.yaml -n demo ``` 4. 
View the Pod status: diff --git a/en/deploy-tidb-operator.md b/en/deploy-tidb-operator.md index 6057088349..bd8b36a3b7 100644 --- a/en/deploy-tidb-operator.md +++ b/en/deploy-tidb-operator.md @@ -44,7 +44,7 @@ TiDB Operator uses [Custom Resource Definition (CRD)](https://kubernetes.io/docs {{< copyable "shell-regular" >}} ```shell -kubectl create -f https://raw.githubusercontent.com/pingcap/tidb-operator/master/manifests/crd.yaml +kubectl create -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.4.0/manifests/crd.yaml ``` If the server cannot access the Internet, you need to download the `crd.yaml` file on a machine with Internet access before installing: @@ -52,7 +52,7 @@ If the server cannot access the Internet, you need to download the `crd.yaml` fi {{< copyable "shell-regular" >}} ```shell -wget https://raw.githubusercontent.com/pingcap/tidb-operator/master/manifests/crd.yaml +wget https://raw.githubusercontent.com/pingcap/tidb-operator/v1.4.0/manifests/crd.yaml kubectl create -f ./crd.yaml ``` @@ -100,7 +100,7 @@ When you use TiDB Operator, `tidb-scheduler` is not mandatory. Refer to [tidb-sc > **Note:** > - > `${chart_version}` represents the chart version of TiDB Operator. For example, `v1.4.0-beta.3`. You can view the currently supported versions by running the `helm search repo -l tidb-operator` command. + > `${chart_version}` represents the chart version of TiDB Operator. For example, `v1.4.0`. You can view the currently supported versions by running the `helm search repo -l tidb-operator` command. 2. Configure TiDB Operator @@ -148,15 +148,15 @@ If your server cannot access the Internet, install TiDB Operator offline by the {{< copyable "shell-regular" >}} ```shell - wget http://charts.pingcap.org/tidb-operator-v1.4.0-beta.3.tgz + wget http://charts.pingcap.org/tidb-operator-v1.4.0.tgz ``` - Copy the `tidb-operator-v1.4.0-beta.3.tgz` file to the target server and extract it to the current directory: + Copy the `tidb-operator-v1.4.0.tgz` file to the target server and extract it to the current directory: {{< copyable "shell-regular" >}} ```shell - tar zxvf tidb-operator.v1.4.0-beta.3.tgz + tar zxvf tidb-operator.v1.4.0.tgz ``` 2. 
Download the Docker images used by TiDB Operator @@ -168,8 +168,8 @@ If your server cannot access the Internet, install TiDB Operator offline by the {{< copyable "" >}} ```shell - pingcap/tidb-operator:v1.4.0-beta.3 - pingcap/tidb-backup-manager:v1.4.0-beta.3 + pingcap/tidb-operator:v1.4.0 + pingcap/tidb-backup-manager:v1.4.0 bitnami/kubectl:latest pingcap/advanced-statefulset:v0.3.3 k8s.gcr.io/kube-scheduler:v1.16.9 @@ -182,13 +182,13 @@ If your server cannot access the Internet, install TiDB Operator offline by the {{< copyable "shell-regular" >}} ```shell - docker pull pingcap/tidb-operator:v1.4.0-beta.3 - docker pull pingcap/tidb-backup-manager:v1.4.0-beta.3 + docker pull pingcap/tidb-operator:v1.4.0 + docker pull pingcap/tidb-backup-manager:v1.4.0 docker pull bitnami/kubectl:latest docker pull pingcap/advanced-statefulset:v0.3.3 - docker save -o tidb-operator-v1.4.0-beta.3.tar pingcap/tidb-operator:v1.4.0-beta.3 - docker save -o tidb-backup-manager-v1.4.0-beta.3.tar pingcap/tidb-backup-manager:v1.4.0-beta.3 + docker save -o tidb-operator-v1.4.0.tar pingcap/tidb-operator:v1.4.0 + docker save -o tidb-backup-manager-v1.4.0.tar pingcap/tidb-backup-manager:v1.4.0 docker save -o bitnami-kubectl.tar bitnami/kubectl:latest docker save -o advanced-statefulset-v0.3.3.tar pingcap/advanced-statefulset:v0.3.3 ``` @@ -198,8 +198,8 @@ If your server cannot access the Internet, install TiDB Operator offline by the {{< copyable "shell-regular" >}} ```shell - docker load -i tidb-operator-v1.4.0-beta.3.tar - docker load -i tidb-backup-manager-v1.4.0-beta.3.tar + docker load -i tidb-operator-v1.4.0.tar + docker load -i tidb-backup-manager-v1.4.0.tar docker load -i bitnami-kubectl.tar docker load -i advanced-statefulset-v0.3.3.tar ``` diff --git a/en/get-started.md b/en/get-started.md index 79560e8aff..dec3da9f83 100644 --- a/en/get-started.md +++ b/en/get-started.md @@ -186,7 +186,7 @@ Run the following command to install the CRDs into your cluster: {{< copyable "shell-regular" >}} ```shell -kubectl create -f https://raw.githubusercontent.com/pingcap/tidb-operator/master/manifests/crd.yaml +kubectl create -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.4.0/manifests/crd.yaml ```
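After creating the CRDs, one way to confirm that they are registered before installing TiDB Operator is to list them:

{{< copyable "shell-regular" >}}

```shell
# List the CustomResourceDefinitions installed by TiDB Operator
kubectl get crd | grep pingcap.com
```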
@@ -251,7 +251,7 @@ This section describes how to install TiDB Operator using [Helm 3](https://helm. {{< copyable "shell-regular" >}} ```shell - helm install --namespace tidb-admin tidb-operator pingcap/tidb-operator --version v1.4.0-beta.3 + helm install --namespace tidb-admin tidb-operator pingcap/tidb-operator --version v1.4.0 ```
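Once the chart is installed, confirm that the TiDB Operator Pods are up before moving on:

{{< copyable "shell-regular" >}}

```shell
# Wait until the tidb-controller-manager and tidb-scheduler Pods are Running
kubectl get pods --namespace tidb-admin -l app.kubernetes.io/instance=tidb-operator
```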
@@ -303,7 +303,7 @@ This section describes how to deploy a TiDB cluster and its monitoring services. ``` shell kubectl create namespace tidb-cluster && \ - kubectl -n tidb-cluster apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/master/examples/basic/tidb-cluster.yaml + kubectl -n tidb-cluster apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.4.0/examples/basic/tidb-cluster.yaml ```
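The cluster components are created asynchronously, so it is worth watching the namespace until the PD, TiKV, and TiDB Pods are all `Running`:

{{< copyable "shell-regular" >}}

```shell
# Watch the Pods in the tidb-cluster namespace until all components are Running
watch kubectl get po -n tidb-cluster
```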
@@ -323,7 +323,7 @@ If you need to deploy a TiDB cluster on an ARM64 machine, refer to [Deploy a TiD {{< copyable "shell-regular" >}} ``` shell -kubectl -n tidb-cluster apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/master/examples/basic/tidb-dashboard.yaml +kubectl -n tidb-cluster apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.4.0/examples/basic/tidb-dashboard.yaml ```
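To reach TiDB Dashboard from a workstation, you can forward its Service port locally. The Service name `basic-tidb-dashboard-exposed` and port `12333` below are assumptions based on a `TidbDashboard` CR named `basic`; check the actual Service name first:

{{< copyable "shell-regular" >}}

```shell
# Confirm the name of the dashboard Service created for the TidbDashboard CR
kubectl get svc -n tidb-cluster | grep dashboard

# Forward the dashboard port, then open http://127.0.0.1:12333 in a browser
kubectl port-forward -n tidb-cluster svc/basic-tidb-dashboard-exposed 12333:12333
```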
@@ -340,7 +340,7 @@ tidbdashboard.pingcap.com/basic created {{< copyable "shell-regular" >}} ``` shell -kubectl -n tidb-cluster apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/master/examples/basic/tidb-monitor.yaml +kubectl -n tidb-cluster apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.4.0/examples/basic/tidb-monitor.yaml ```
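After the monitor is deployed, Grafana can be reached through a port forward. The Service name `basic-grafana` below assumes a `TidbMonitor` CR named `basic`, and the default Grafana credentials are typically `admin`/`admin`:

{{< copyable "shell-regular" >}}

```shell
# Forward Grafana, then open http://127.0.0.1:3000 in a browser
kubectl port-forward -n tidb-cluster svc/basic-grafana 3000:3000
```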
diff --git a/en/tidb-toolkit.md b/en/tidb-toolkit.md index 9d03bdd458..ecd09a1be3 100644 --- a/en/tidb-toolkit.md +++ b/en/tidb-toolkit.md @@ -200,12 +200,12 @@ helm search repo pingcap ``` NAME CHART VERSION APP VERSION DESCRIPTION -pingcap/tidb-backup v1.4.0-beta.3 A Helm chart for TiDB Backup or Restore -pingcap/tidb-cluster v1.4.0-beta.3 A Helm chart for TiDB Cluster -pingcap/tidb-drainer v1.4.0-beta.3 A Helm chart for TiDB Binlog drainer. -pingcap/tidb-lightning v1.4.0-beta.3 A Helm chart for TiDB Lightning -pingcap/tidb-operator v1.4.0-beta.3 v1.4.0-beta.3 tidb-operator Helm chart for Kubernetes -pingcap/tikv-importer v1.4.0-beta.3 A Helm chart for TiKV Importer +pingcap/tidb-backup v1.4.0 A Helm chart for TiDB Backup or Restore +pingcap/tidb-cluster v1.4.0 A Helm chart for TiDB Cluster +pingcap/tidb-drainer v1.4.0 A Helm chart for TiDB Binlog drainer. +pingcap/tidb-lightning v1.4.0 A Helm chart for TiDB Lightning +pingcap/tidb-operator v1.4.0 v1.4.0 tidb-operator Helm chart for Kubernetes +pingcap/tikv-importer v1.4.0 A Helm chart for TiKV Importer ``` When a new version of chart has been released, you can use `helm repo update` to update the repository cached locally: @@ -267,9 +267,9 @@ Use the following command to download the chart file required for cluster instal {{< copyable "shell-regular" >}} ```shell -wget http://charts.pingcap.org/tidb-operator-v1.4.0-beta.3.tgz -wget http://charts.pingcap.org/tidb-drainer-v1.4.0-beta.3.tgz -wget http://charts.pingcap.org/tidb-lightning-v1.4.0-beta.3.tgz +wget http://charts.pingcap.org/tidb-operator-v1.4.0.tgz +wget http://charts.pingcap.org/tidb-drainer-v1.4.0.tgz +wget http://charts.pingcap.org/tidb-lightning-v1.4.0.tgz ``` Copy these chart files to the server and decompress them. You can use these charts to install the corresponding components by running the `helm install` command. Take `tidb-operator` as an example: @@ -277,7 +277,7 @@ Copy these chart files to the server and decompress them. You can use these char {{< copyable "shell-regular" >}} ```shell -tar zxvf tidb-operator.v1.4.0-beta.3.tgz +tar zxvf tidb-operator.v1.4.0.tgz helm install ${release_name} ./tidb-operator --namespace=${namespace} ``` diff --git a/en/upgrade-tidb-operator.md b/en/upgrade-tidb-operator.md index 8f0753b192..be14a0c9c7 100644 --- a/en/upgrade-tidb-operator.md +++ b/en/upgrade-tidb-operator.md @@ -59,27 +59,27 @@ If your server has access to the internet, you can perform online upgrade by tak kubectl get crd tidbclusters.pingcap.com ``` - This document takes TiDB v1.4.0-beta.3 as an example. You can replace `${operator_version}` with the specific version you want to upgrade to. + This document takes TiDB v1.4.0 as an example. You can replace `${operator_version}` with the specific version you want to upgrade to. 3. Get the `values.yaml` file of the `tidb-operator` chart: {{< copyable "shell-regular" >}} ```bash - mkdir -p ${HOME}/tidb-operator/v1.4.0-beta.3 && \ - helm inspect values pingcap/tidb-operator --version=v1.4.0-beta.3 > ${HOME}/tidb-operator/v1.4.0-beta.3/values-tidb-operator.yaml + mkdir -p ${HOME}/tidb-operator/v1.4.0 && \ + helm inspect values pingcap/tidb-operator --version=v1.4.0 > ${HOME}/tidb-operator/v1.4.0/values-tidb-operator.yaml ``` -4. In the `${HOME}/tidb-operator/v1.4.0-beta.3/values-tidb-operator.yaml` file, modify the `operatorImage` version to the new TiDB Operator version. +4. In the `${HOME}/tidb-operator/v1.4.0/values-tidb-operator.yaml` file, modify the `operatorImage` version to the new TiDB Operator version. -5. 
If you have added customized configuration in the old `values.yaml` file, merge your customized configuration to the `${HOME}/tidb-operator/v1.4.0-beta.3/values-tidb-operator.yaml` file. +5. If you have added customized configuration in the old `values.yaml` file, merge your customized configuration to the `${HOME}/tidb-operator/v1.4.0/values-tidb-operator.yaml` file. 6. Perform upgrade: {{< copyable "shell-regular" >}} ```bash - helm upgrade tidb-operator pingcap/tidb-operator --version=v1.4.0-beta.3 -f ${HOME}/tidb-operator/v1.4.0-beta.3/values-tidb-operator.yaml + helm upgrade tidb-operator pingcap/tidb-operator --version=v1.4.0 -f ${HOME}/tidb-operator/v1.4.0/values-tidb-operator.yaml ``` 7. After all the Pods start normally, check the image of TiDB Operator: @@ -90,13 +90,13 @@ If your server has access to the internet, you can perform online upgrade by tak kubectl get po -n tidb-admin -l app.kubernetes.io/instance=tidb-operator -o yaml | grep 'image:.*operator:' ``` - If you see a similar output as follows, TiDB Operator is successfully upgraded. `v1.4.0-beta.3` represents the TiDB Operator version you have upgraded to. + If you see a similar output as follows, TiDB Operator is successfully upgraded. `v1.4.0` represents the TiDB Operator version you have upgraded to. ``` - image: pingcap/tidb-operator:v1.4.0-beta.3 - image: docker.io/pingcap/tidb-operator:v1.4.0-beta.3 - image: pingcap/tidb-operator:v1.4.0-beta.3 - image: docker.io/pingcap/tidb-operator:v1.4.0-beta.3 + image: pingcap/tidb-operator:v1.4.0 + image: docker.io/pingcap/tidb-operator:v1.4.0 + image: pingcap/tidb-operator:v1.4.0 + image: docker.io/pingcap/tidb-operator:v1.4.0 ``` ## Offline upgrade @@ -123,14 +123,14 @@ If your server cannot access the Internet, you can offline upgrade by taking the wget -O crd.yaml https://raw.githubusercontent.com/pingcap/tidb-operator/${operator_version}/manifests/crd_v1beta1.yaml ``` - This document takes TiDB v1.4.0-beta.3 as an example. You can replace `${operator_version}` with the specific version you want to upgrade to. + This document takes TiDB v1.4.0 as an example. You can replace `${operator_version}` with the specific version you want to upgrade to. 2. Download the `tidb-operator` chart package file. {{< copyable "shell-regular" >}} ```bash - wget http://charts.pingcap.org/tidb-operator-v1.4.0-beta.3.tgz + wget http://charts.pingcap.org/tidb-operator-v1.4.0.tgz ``` 3. Download the Docker images required for the new TiDB Operator version: @@ -138,11 +138,11 @@ If your server cannot access the Internet, you can offline upgrade by taking the {{< copyable "shell-regular" >}} ```bash - docker pull pingcap/tidb-operator:v1.4.0-beta.3 - docker pull pingcap/tidb-backup-manager:v1.4.0-beta.3 + docker pull pingcap/tidb-operator:v1.4.0 + docker pull pingcap/tidb-backup-manager:v1.4.0 - docker save -o tidb-operator-v1.4.0-beta.3.tar pingcap/tidb-operator:v1.4.0-beta.3 - docker save -o tidb-backup-manager-v1.4.0-beta.3.tar pingcap/tidb-backup-manager:v1.4.0-beta.3 + docker save -o tidb-operator-v1.4.0.tar pingcap/tidb-operator:v1.4.0 + docker save -o tidb-backup-manager-v1.4.0.tar pingcap/tidb-backup-manager:v1.4.0 ``` 2. 
Upload the downloaded files and images to the server where TiDB Operator is deployed, and install the new TiDB Operator version: @@ -170,9 +170,9 @@ If your server cannot access the Internet, you can offline upgrade by taking the {{< copyable "shell-regular" >}} ```bash - tar zxvf tidb-operator-v1.4.0-beta.3.tgz && \ - mkdir -p ${HOME}/tidb-operator/v1.4.0-beta.3 && \ - cp tidb-operator/values.yaml ${HOME}/tidb-operator/v1.4.0-beta.3/values-tidb-operator.yaml + tar zxvf tidb-operator-v1.4.0.tgz && \ + mkdir -p ${HOME}/tidb-operator/v1.4.0 && \ + cp tidb-operator/values.yaml ${HOME}/tidb-operator/v1.4.0/values-tidb-operator.yaml ``` 4. Install the Docker images on the server: @@ -180,20 +180,20 @@ If your server cannot access the Internet, you can offline upgrade by taking the {{< copyable "shell-regular" >}} ```bash - docker load -i tidb-operator-v1.4.0-beta.3.tar && \ - docker load -i tidb-backup-manager-v1.4.0-beta.3.tar + docker load -i tidb-operator-v1.4.0.tar && \ + docker load -i tidb-backup-manager-v1.4.0.tar ``` -3. In the `${HOME}/tidb-operator/v1.4.0-beta.3/values-tidb-operator.yaml` file, modify the `operatorImage` version to the new TiDB Operator version. +3. In the `${HOME}/tidb-operator/v1.4.0/values-tidb-operator.yaml` file, modify the `operatorImage` version to the new TiDB Operator version. -4. If you have added customized configuration in the old `values.yaml` file, merge your customized configuration to the `${HOME}/tidb-operator/v1.4.0-beta.3/values-tidb-operator.yaml` file. +4. If you have added customized configuration in the old `values.yaml` file, merge your customized configuration to the `${HOME}/tidb-operator/v1.4.0/values-tidb-operator.yaml` file. 5. Perform upgrade: {{< copyable "shell-regular" >}} ```bash - helm upgrade tidb-operator ./tidb-operator --version=v1.4.0-beta.3 -f ${HOME}/tidb-operator/v1.4.0-beta.3/values-tidb-operator.yaml + helm upgrade tidb-operator ./tidb-operator --version=v1.4.0 -f ${HOME}/tidb-operator/v1.4.0/values-tidb-operator.yaml ``` 6. After all the Pods start normally, check the image version of TiDB Operator: @@ -204,13 +204,13 @@ If your server cannot access the Internet, you can offline upgrade by taking the kubectl get po -n tidb-admin -l app.kubernetes.io/instance=tidb-operator -o yaml | grep 'image:.*operator:' ``` - If you see a similar output as follows, TiDB Operator is successfully upgraded. `v1.4.0-beta.3` represents the TiDB Operator version you have upgraded to. + If you see a similar output as follows, TiDB Operator is successfully upgraded. `v1.4.0` represents the TiDB Operator version you have upgraded to. 
``` - image: pingcap/tidb-operator:v1.4.0-beta.3 - image: docker.io/pingcap/tidb-operator:v1.4.0-beta.3 - image: pingcap/tidb-operator:v1.4.0-beta.3 - image: docker.io/pingcap/tidb-operator:v1.4.0-beta.3 + image: pingcap/tidb-operator:v1.4.0 + image: docker.io/pingcap/tidb-operator:v1.4.0 + image: pingcap/tidb-operator:v1.4.0 + image: docker.io/pingcap/tidb-operator:v1.4.0 ``` > **Note:** diff --git a/zh/advanced-statefulset.md b/zh/advanced-statefulset.md index 51c9b89256..1fb5bfb6b0 100644 --- a/zh/advanced-statefulset.md +++ b/zh/advanced-statefulset.md @@ -20,7 +20,7 @@ Kubernetes 内置 [StatefulSet](https://kubernetes.io/docs/concepts/workloads/co {{< copyable "shell-regular" >}} ```shell - kubectl apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/master/manifests/advanced-statefulset-crd.v1beta1.yaml + kubectl apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.4.0/manifests/advanced-statefulset-crd.v1beta1.yaml ``` * Kubernetes 1.16 及之后版本: @@ -28,7 +28,7 @@ Kubernetes 内置 [StatefulSet](https://kubernetes.io/docs/concepts/workloads/co {{< copyable "shell-regular" >}} ```shell - kubectl apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/master/manifests/advanced-statefulset-crd.v1.yaml + kubectl apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.4.0/manifests/advanced-statefulset-crd.v1.yaml ``` 2. 在 TiDB Operator chart 的 `values.yaml` 中启用 `AdvancedStatefulSet` 特性: diff --git a/zh/aggregate-multiple-cluster-monitor-data.md b/zh/aggregate-multiple-cluster-monitor-data.md index 4c240526de..0b8226fa58 100644 --- a/zh/aggregate-multiple-cluster-monitor-data.md +++ b/zh/aggregate-multiple-cluster-monitor-data.md @@ -24,7 +24,7 @@ Thanos 提供了跨 Prometheus 的统一查询方案 [Thanos Query](https://than {{< copyable "shell-regular" >}} ``` - kubectl -n ${namespace} apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/master/examples/monitor-with-thanos/tidb-monitor.yaml + kubectl -n ${namespace} apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.4.0/examples/monitor-with-thanos/tidb-monitor.yaml ``` 2. 部署 Thanos Query 组件。 @@ -34,7 +34,7 @@ Thanos 提供了跨 Prometheus 的统一查询方案 [Thanos Query](https://than {{< copyable "shell-regular" >}} ``` - curl -sl -O https://raw.githubusercontent.com/pingcap/tidb-operator/master/examples/monitor-with-thanos/thanos-query.yaml + curl -sl -O https://raw.githubusercontent.com/pingcap/tidb-operator/v1.4.0/examples/monitor-with-thanos/thanos-query.yaml ``` 2. 手动修改 `thanos-query.yaml` 文件中的 `--store` 参数,将 `basic-prometheus:10901` 改为 `basic-prometheus.${namespace}:10901`。 diff --git a/zh/backup-to-s3.md b/zh/backup-to-s3.md index 2535d31a8c..4d5ab082c3 100644 --- a/zh/backup-to-s3.md +++ b/zh/backup-to-s3.md @@ -49,12 +49,12 @@ GRANT ### 第 1 步:Ad-hoc 全量备份环境准备 -1. 执行以下命令,根据 [backup-rbac.yaml](https://raw.githubusercontent.com/pingcap/tidb-operator/master/manifests/backup/backup-rbac.yaml) 在 `tidb-cluster` 命名空间创建基于角色的访问控制 (RBAC) 资源。 +1. 执行以下命令,根据 [backup-rbac.yaml](https://raw.githubusercontent.com/pingcap/tidb-operator/v1.4.0/manifests/backup/backup-rbac.yaml) 在 `tidb-cluster` 命名空间创建基于角色的访问控制 (RBAC) 资源。 {{< copyable "shell-regular" >}} ```shell - kubectl apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/master/manifests/backup/backup-rbac.yaml -n tidb-cluster + kubectl apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.4.0/manifests/backup/backup-rbac.yaml -n tidb-cluster ``` 2. 
远程存储访问授权。 diff --git a/zh/cheat-sheet.md b/zh/cheat-sheet.md index 94afb79175..c6ceb21f92 100644 --- a/zh/cheat-sheet.md +++ b/zh/cheat-sheet.md @@ -492,7 +492,7 @@ helm inspect values ${chart_name} --version=${chart_version} > values.yaml {{< copyable "shell-regular" >}} ```shell -helm inspect values pingcap/tidb-operator --version=v1.4.0-beta.3 > values-tidb-operator.yaml +helm inspect values pingcap/tidb-operator --version=v1.4.0 > values-tidb-operator.yaml ``` ### 使用 Helm Chart 部署 @@ -508,7 +508,7 @@ helm install ${name} ${chart_name} --namespace=${namespace} --version=${chart_ve {{< copyable "shell-regular" >}} ```shell -helm install tidb-operator pingcap/tidb-operator --namespace=tidb-admin --version=v1.4.0-beta.3 -f values-tidb-operator.yaml +helm install tidb-operator pingcap/tidb-operator --namespace=tidb-admin --version=v1.4.0 -f values-tidb-operator.yaml ``` ### 查看已经部署的 Helm Release @@ -532,7 +532,7 @@ helm upgrade ${name} ${chart_name} --version=${chart_version} -f ${values_file} {{< copyable "shell-regular" >}} ```shell -helm upgrade tidb-operator pingcap/tidb-operator --version=v1.4.0-beta.3 -f values-tidb-operator.yaml +helm upgrade tidb-operator pingcap/tidb-operator --version=v1.4.0 -f values-tidb-operator.yaml ``` ### 删除 Helm Release diff --git a/zh/configure-storage-class.md b/zh/configure-storage-class.md index 88c4f416b9..c677d72072 100644 --- a/zh/configure-storage-class.md +++ b/zh/configure-storage-class.md @@ -102,7 +102,7 @@ Kubernetes 当前支持静态分配的本地存储。可使用 [local-static-pro {{< copyable "shell-regular" >}} ```shell - wget https://raw.githubusercontent.com/pingcap/tidb-operator/master/examples/local-pv/local-volume-provisioner.yaml + wget https://raw.githubusercontent.com/pingcap/tidb-operator/v1.4.0/examples/local-pv/local-volume-provisioner.yaml ``` 2. 如果你使用的发现路径与[第 1 步:准备本地存储](#第-1-步准备本地存储)中的示例一致,可跳过这一步。如果你使用与上一步中不同路径的发现目录,需要修改 ConfigMap 和 DaemonSet 定义。 diff --git a/zh/deploy-on-alibaba-cloud.md b/zh/deploy-on-alibaba-cloud.md index 13e9c071cb..7d955fc262 100644 --- a/zh/deploy-on-alibaba-cloud.md +++ b/zh/deploy-on-alibaba-cloud.md @@ -88,7 +88,7 @@ summary: 介绍如何在阿里云上部署 TiDB 集群。 tikv_count = 3 tidb_count = 2 pd_count = 3 - operator_version = "v1.4.0-beta.3" + operator_version = "v1.4.0" ``` 如果需要在集群中部署 TiFlash,需要在 `terraform.tfvars` 中设置 `create_tiflash_node_pool = true`,也可以设置 `tiflash_count` 和 `tiflash_instance_type` 来配置 TiFlash 节点池的节点数量和实例类型,`tiflash_count` 默认为 `2`,`tiflash_instance_type` 默认为 `ecs.i2.2xlarge`。 diff --git a/zh/deploy-on-aws-eks.md b/zh/deploy-on-aws-eks.md index c93b8331f1..0bd37ec36a 100644 --- a/zh/deploy-on-aws-eks.md +++ b/zh/deploy-on-aws-eks.md @@ -291,7 +291,7 @@ mountOptions: 2. 通过[普通挂载方式](https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner/blob/master/docs/operations.md#use-a-whole-disk-as-a-filesystem-pv)将本地存储挂载到 `/mnt/ssd` 目录。 - 3. 根据本地存储的挂载情况,修改 [local-volume-provisioner.yaml](https://raw.githubusercontent.com/pingcap/tidb-operator/master/manifests/eks/local-volume-provisioner.yaml) 文件。 + 3. 根据本地存储的挂载情况,修改 [local-volume-provisioner.yaml](https://raw.githubusercontent.com/pingcap/tidb-operator/v1.4.0/manifests/eks/local-volume-provisioner.yaml) 文件。 4. 
使用修改后的 `local-volume-provisioner.yaml`,部署并创建一个 `local-storage` 的 Storage Class: @@ -336,9 +336,9 @@ kubectl create namespace tidb-cluster {{< copyable "shell-regular" >}} ```shell -curl -O https://raw.githubusercontent.com/pingcap/tidb-operator/master/examples/aws/tidb-cluster.yaml && \ -curl -O https://raw.githubusercontent.com/pingcap/tidb-operator/master/examples/aws/tidb-monitor.yaml && \ -curl -O https://raw.githubusercontent.com/pingcap/tidb-operator/master/examples/aws/tidb-dashboard.yaml +curl -O https://raw.githubusercontent.com/pingcap/tidb-operator/v1.4.0/examples/aws/tidb-cluster.yaml && \ +curl -O https://raw.githubusercontent.com/pingcap/tidb-operator/v1.4.0/examples/aws/tidb-monitor.yaml && \ +curl -O https://raw.githubusercontent.com/pingcap/tidb-operator/v1.4.0/examples/aws/tidb-dashboard.yaml ``` 如需了解更详细的配置信息或者进行自定义配置,请参考[配置 TiDB 集群](configure-a-tidb-cluster.md) diff --git a/zh/deploy-on-azure-aks.md b/zh/deploy-on-azure-aks.md index 4cdb5f5d71..a7902ac78d 100644 --- a/zh/deploy-on-azure-aks.md +++ b/zh/deploy-on-azure-aks.md @@ -232,9 +232,9 @@ kubectl create namespace tidb-cluster {{< copyable "shell-regular" >}} ```shell -curl -O https://raw.githubusercontent.com/pingcap/tidb-operator/master/examples/aks/tidb-cluster.yaml && \ -curl -O https://raw.githubusercontent.com/pingcap/tidb-operator/master/examples/aks/tidb-monitor.yaml && \ -curl -O https://raw.githubusercontent.com/pingcap/tidb-operator/master/examples/aks/tidb-dashboard.yaml +curl -O https://raw.githubusercontent.com/pingcap/tidb-operator/v1.4.0/examples/aks/tidb-cluster.yaml && \ +curl -O https://raw.githubusercontent.com/pingcap/tidb-operator/v1.4.0/examples/aks/tidb-monitor.yaml && \ +curl -O https://raw.githubusercontent.com/pingcap/tidb-operator/v1.4.0/examples/aks/tidb-dashboard.yaml ``` 如需了解更详细的配置信息或者进行自定义配置,请参考[配置 TiDB 集群](configure-a-tidb-cluster.md) @@ -585,7 +585,7 @@ Azure Disk 支持多种磁盘类型。若需要低延迟、高吞吐,可以选 {{< copyable "shell-regular" >}} ```shell - kubectl apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/master/manifests/eks/local-volume-provisioner.yaml + kubectl apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.4.0/manifests/eks/local-volume-provisioner.yaml ``` 3. 使用本地存储。 diff --git a/zh/deploy-on-gcp-gke.md b/zh/deploy-on-gcp-gke.md index b9614dcaa4..835472e97f 100644 --- a/zh/deploy-on-gcp-gke.md +++ b/zh/deploy-on-gcp-gke.md @@ -129,7 +129,7 @@ mountOptions: {{< copyable "shell-regular" >}} ```shell - kubectl apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/master/manifests/gke/local-ssd-provision/local-ssd-provision.yaml + kubectl apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.4.0/manifests/gke/local-ssd-provision/local-ssd-provision.yaml ``` 3. 
使用本地存储。 @@ -165,9 +165,9 @@ kubectl create namespace tidb-cluster {{< copyable "shell-regular" >}} ```shell -curl -O https://raw.githubusercontent.com/pingcap/tidb-operator/master/examples/gcp/tidb-cluster.yaml && \ -curl -O https://raw.githubusercontent.com/pingcap/tidb-operator/master/examples/gcp/tidb-monitor.yaml && \ -curl -O https://raw.githubusercontent.com/pingcap/tidb-operator/master/examples/gcp/tidb-dashboard.yaml +curl -O https://raw.githubusercontent.com/pingcap/tidb-operator/v1.4.0/examples/gcp/tidb-cluster.yaml && \ +curl -O https://raw.githubusercontent.com/pingcap/tidb-operator/v1.4.0/examples/gcp/tidb-monitor.yaml && \ +curl -O https://raw.githubusercontent.com/pingcap/tidb-operator/v1.4.0/examples/gcp/tidb-dashboard.yaml ``` 如需了解更详细的配置信息或者进行自定义配置,请参考[配置 TiDB 集群](configure-a-tidb-cluster.md) diff --git a/zh/deploy-tidb-from-kubernetes-gke.md b/zh/deploy-tidb-from-kubernetes-gke.md index ab1a880bc3..43a65f27e8 100644 --- a/zh/deploy-tidb-from-kubernetes-gke.md +++ b/zh/deploy-tidb-from-kubernetes-gke.md @@ -93,7 +93,7 @@ kubectl get nodes TiDB Operator 使用 [Custom Resource Definition (CRD)](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/#customresourcedefinitions) 扩展 Kubernetes,所以要使用 TiDB Operator,必须先创建 `TidbCluster` 等各种自定义资源类型: ```shell -kubectl create -f https://raw.githubusercontent.com/pingcap/tidb-operator/master/manifests/crd.yaml && \ +kubectl create -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.4.0/manifests/crd.yaml && \ kubectl get crd tidbclusters.pingcap.com ``` @@ -105,7 +105,7 @@ kubectl get crd tidbclusters.pingcap.com ```shell kubectl create namespace tidb-admin -helm install --namespace tidb-admin tidb-operator pingcap/tidb-operator --version v1.4.0-beta.3 +helm install --namespace tidb-admin tidb-operator pingcap/tidb-operator --version v1.4.0 kubectl get po -n tidb-admin -l app.kubernetes.io/name=tidb-operator ``` @@ -122,13 +122,13 @@ kubectl get po -n tidb-admin -l app.kubernetes.io/name=tidb-operator 2. 部署 TiDB 集群: ``` shell - kubectl apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/master/examples/basic/tidb-cluster.yaml -n demo + kubectl apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.4.0/examples/basic/tidb-cluster.yaml -n demo ``` 3. 部署 TiDB 集群监控: ``` shell - kubectl apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/master/examples/basic/tidb-monitor.yaml -n demo + kubectl apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.4.0/examples/basic/tidb-monitor.yaml -n demo ``` 4. 
通过下面命令查看 Pod 状态: diff --git a/zh/deploy-tidb-operator.md b/zh/deploy-tidb-operator.md index 702c7a8950..19c0799dff 100644 --- a/zh/deploy-tidb-operator.md +++ b/zh/deploy-tidb-operator.md @@ -44,7 +44,7 @@ TiDB Operator 使用 [Custom Resource Definition (CRD)](https://kubernetes.io/do {{< copyable "shell-regular" >}} ```shell -kubectl create -f https://raw.githubusercontent.com/pingcap/tidb-operator/master/manifests/crd.yaml +kubectl create -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.4.0/manifests/crd.yaml ``` 如果服务器没有外网,需要先用有外网的机器下载 `crd.yaml` 文件,然后再进行安装: @@ -52,7 +52,7 @@ kubectl create -f https://raw.githubusercontent.com/pingcap/tidb-operator/master {{< copyable "shell-regular" >}} ```shell -wget https://raw.githubusercontent.com/pingcap/tidb-operator/master/manifests/crd.yaml +wget https://raw.githubusercontent.com/pingcap/tidb-operator/v1.4.0/manifests/crd.yaml kubectl create -f ./crd.yaml ``` @@ -100,7 +100,7 @@ tidbmonitors.pingcap.com 2020-06-11T07:59:41Z > **注意:** > - > `${chart_version}` 在后续文档中代表 chart 版本,例如 `v1.4.0-beta.3`,可以通过 `helm search repo -l tidb-operator` 查看当前支持的版本。 + > `${chart_version}` 在后续文档中代表 chart 版本,例如 `v1.4.0`,可以通过 `helm search repo -l tidb-operator` 查看当前支持的版本。 2. 配置 TiDB Operator @@ -150,15 +150,15 @@ tidbmonitors.pingcap.com 2020-06-11T07:59:41Z {{< copyable "shell-regular" >}} ```shell - wget http://charts.pingcap.org/tidb-operator-v1.4.0-beta.3.tgz + wget http://charts.pingcap.org/tidb-operator-v1.4.0.tgz ``` - 将 `tidb-operator-v1.4.0-beta.3.tgz` 文件拷贝到服务器上并解压到当前目录: + 将 `tidb-operator-v1.4.0.tgz` 文件拷贝到服务器上并解压到当前目录: {{< copyable "shell-regular" >}} ```shell - tar zxvf tidb-operator.v1.4.0-beta.3.tgz + tar zxvf tidb-operator.v1.4.0.tgz ``` 2. 下载 TiDB Operator 运行所需的 Docker 镜像 @@ -168,8 +168,8 @@ tidbmonitors.pingcap.com 2020-06-11T07:59:41Z TiDB Operator 用到的 Docker 镜像有: ```shell - pingcap/tidb-operator:v1.4.0-beta.3 - pingcap/tidb-backup-manager:v1.4.0-beta.3 + pingcap/tidb-operator:v1.4.0 + pingcap/tidb-backup-manager:v1.4.0 bitnami/kubectl:latest pingcap/advanced-statefulset:v0.3.3 k8s.gcr.io/kube-scheduler:v1.16.9 @@ -182,13 +182,13 @@ tidbmonitors.pingcap.com 2020-06-11T07:59:41Z {{< copyable "shell-regular" >}} ```shell - docker pull pingcap/tidb-operator:v1.4.0-beta.3 - docker pull pingcap/tidb-backup-manager:v1.4.0-beta.3 + docker pull pingcap/tidb-operator:v1.4.0 + docker pull pingcap/tidb-backup-manager:v1.4.0 docker pull bitnami/kubectl:latest docker pull pingcap/advanced-statefulset:v0.3.3 - docker save -o tidb-operator-v1.4.0-beta.3.tar pingcap/tidb-operator:v1.4.0-beta.3 - docker save -o tidb-backup-manager-v1.4.0-beta.3.tar pingcap/tidb-backup-manager:v1.4.0-beta.3 + docker save -o tidb-operator-v1.4.0.tar pingcap/tidb-operator:v1.4.0 + docker save -o tidb-backup-manager-v1.4.0.tar pingcap/tidb-backup-manager:v1.4.0 docker save -o bitnami-kubectl.tar bitnami/kubectl:latest docker save -o advanced-statefulset-v0.3.3.tar pingcap/advanced-statefulset:v0.3.3 ``` @@ -198,8 +198,8 @@ tidbmonitors.pingcap.com 2020-06-11T07:59:41Z {{< copyable "shell-regular" >}} ```shell - docker load -i tidb-operator-v1.4.0-beta.3.tar - docker load -i tidb-backup-manager-v1.4.0-beta.3.tar + docker load -i tidb-operator-v1.4.0.tar + docker load -i tidb-backup-manager-v1.4.0.tar docker load -i bitnami-kubectl.tar docker load -i advanced-statefulset-v0.3.3.tar ``` diff --git a/zh/get-started.md b/zh/get-started.md index 48f0a6f9b6..b84c46907b 100644 --- a/zh/get-started.md +++ b/zh/get-started.md @@ -195,7 +195,7 @@ TiDB Operator 包含许多实现 TiDB 集群不同组件的自定义资源类型 {{< copyable "shell-regular" >}} ```shell 
-kubectl create -f https://raw.githubusercontent.com/pingcap/tidb-operator/master/manifests/crd.yaml +kubectl create -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.4.0/manifests/crd.yaml ```
@@ -260,7 +260,7 @@ customresourcedefinition.apiextensions.k8s.io/tidbclusterautoscalers.pingcap.com {{< copyable "shell-regular" >}} ```shell - helm install --namespace tidb-admin tidb-operator pingcap/tidb-operator --version v1.4.0-beta.3 + helm install --namespace tidb-admin tidb-operator pingcap/tidb-operator --version v1.4.0 ``` 如果访问 Docker Hub 网速较慢,可以使用阿里云上的镜像: @@ -268,9 +268,9 @@ customresourcedefinition.apiextensions.k8s.io/tidbclusterautoscalers.pingcap.com {{< copyable "shell-regular" >}} ``` - helm install --namespace tidb-admin tidb-operator pingcap/tidb-operator --version v1.4.0-beta.3 \ - --set operatorImage=registry.cn-beijing.aliyuncs.com/tidb/tidb-operator:v1.4.0-beta.3 \ - --set tidbBackupManagerImage=registry.cn-beijing.aliyuncs.com/tidb/tidb-backup-manager:v1.4.0-beta.3 \ + helm install --namespace tidb-admin tidb-operator pingcap/tidb-operator --version v1.4.0 \ + --set operatorImage=registry.cn-beijing.aliyuncs.com/tidb/tidb-operator:v1.4.0 \ + --set tidbBackupManagerImage=registry.cn-beijing.aliyuncs.com/tidb/tidb-backup-manager:v1.4.0 \ --set scheduler.kubeSchedulerImageName=registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler ``` @@ -323,7 +323,7 @@ tidb-scheduler-644d59b46f-4f6sb 2/2 Running 0 2m22s ``` shell kubectl create namespace tidb-cluster && \ - kubectl -n tidb-cluster apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/master/examples/basic/tidb-cluster.yaml + kubectl -n tidb-cluster apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.4.0/examples/basic/tidb-cluster.yaml ``` 如果访问 Docker Hub 网速较慢,可以使用 UCloud 上的镜像: @@ -332,7 +332,7 @@ kubectl create namespace tidb-cluster && \ ``` kubectl create namespace tidb-cluster && \ - kubectl -n tidb-cluster apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/master/examples/basic-cn/tidb-cluster.yaml + kubectl -n tidb-cluster apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.4.0/examples/basic-cn/tidb-cluster.yaml ```
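With the cluster deployed, you can connect to TiDB through a local port forward and a MySQL client. The Service name `basic-tidb` below assumes the example cluster named `basic`; the example cluster uses an empty root password:

{{< copyable "shell-regular" >}}

```shell
# Forward the TiDB service port 4000 to local port 14000 in the background
kubectl port-forward -n tidb-cluster svc/basic-tidb 14000:4000 > pf14000.out &

# Connect with a MySQL client
mysql --comments -h 127.0.0.1 -P 14000 -u root
```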
@@ -352,7 +352,7 @@ tidbcluster.pingcap.com/basic created {{< copyable "shell-regular" >}} ``` shell -kubectl -n tidb-cluster apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/master/examples/basic/tidb-dashboard.yaml +kubectl -n tidb-cluster apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.4.0/examples/basic/tidb-dashboard.yaml ``` 如果访问 Docker Hub 网速较慢,可以使用 UCloud 上的镜像: @@ -360,7 +360,7 @@ kubectl -n tidb-cluster apply -f https://raw.githubusercontent.com/pingcap/tidb- {{< copyable "shell-regular" >}} ``` -kubectl -n tidb-cluster apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/master/examples/basic-cn/tidb-dashboard.yaml +kubectl -n tidb-cluster apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.4.0/examples/basic-cn/tidb-dashboard.yaml ```
@@ -377,7 +377,7 @@ tidbdashboard.pingcap.com/basic created {{< copyable "shell-regular" >}} ``` shell -kubectl -n tidb-cluster apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/master/examples/basic/tidb-monitor.yaml +kubectl -n tidb-cluster apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.4.0/examples/basic/tidb-monitor.yaml ``` 如果访问 Docker Hub 网速较慢,可以使用 UCloud 上的镜像: @@ -385,7 +385,7 @@ kubectl -n tidb-cluster apply -f https://raw.githubusercontent.com/pingcap/tidb- {{< copyable "shell-regular" >}} ``` -kubectl -n tidb-cluster apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/master/examples/basic-cn/tidb-monitor.yaml +kubectl -n tidb-cluster apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.4.0/examples/basic-cn/tidb-monitor.yaml ```
diff --git a/zh/tidb-toolkit.md b/zh/tidb-toolkit.md index a2931a3259..3b4462f675 100644 --- a/zh/tidb-toolkit.md +++ b/zh/tidb-toolkit.md @@ -200,12 +200,12 @@ helm search repo pingcap ``` NAME CHART VERSION APP VERSION DESCRIPTION -pingcap/tidb-backup v1.4.0-beta.3 A Helm chart for TiDB Backup or Restore -pingcap/tidb-cluster v1.4.0-beta.3 A Helm chart for TiDB Cluster -pingcap/tidb-drainer v1.4.0-beta.3 A Helm chart for TiDB Binlog drainer. -pingcap/tidb-lightning v1.4.0-beta.3 A Helm chart for TiDB Lightning -pingcap/tidb-operator v1.4.0-beta.3 v1.4.0-beta.3 tidb-operator Helm chart for Kubernetes -pingcap/tikv-importer v1.4.0-beta.3 A Helm chart for TiKV Importer +pingcap/tidb-backup v1.4.0 A Helm chart for TiDB Backup or Restore +pingcap/tidb-cluster v1.4.0 A Helm chart for TiDB Cluster +pingcap/tidb-drainer v1.4.0 A Helm chart for TiDB Binlog drainer. +pingcap/tidb-lightning v1.4.0 A Helm chart for TiDB Lightning +pingcap/tidb-operator v1.4.0 v1.4.0 tidb-operator Helm chart for Kubernetes +pingcap/tikv-importer v1.4.0 A Helm chart for TiKV Importer ``` 当新版本的 chart 发布后,你可以使用 `helm repo update` 命令更新本地对于仓库的缓存: @@ -265,9 +265,9 @@ helm uninstall ${release_name} -n ${namespace} {{< copyable "shell-regular" >}} ```shell -wget http://charts.pingcap.org/tidb-operator-v1.4.0-beta.3.tgz -wget http://charts.pingcap.org/tidb-drainer-v1.4.0-beta.3.tgz -wget http://charts.pingcap.org/tidb-lightning-v1.4.0-beta.3.tgz +wget http://charts.pingcap.org/tidb-operator-v1.4.0.tgz +wget http://charts.pingcap.org/tidb-drainer-v1.4.0.tgz +wget http://charts.pingcap.org/tidb-lightning-v1.4.0.tgz ``` 将这些 chart 文件拷贝到服务器上并解压,可以通过 `helm install` 命令使用这些 chart 来安装相应组件,以 `tidb-operator` 为例: @@ -275,7 +275,7 @@ wget http://charts.pingcap.org/tidb-lightning-v1.4.0-beta.3.tgz {{< copyable "shell-regular" >}} ```shell -tar zxvf tidb-operator.v1.4.0-beta.3.tgz +tar zxvf tidb-operator.v1.4.0.tgz helm install ${release_name} ./tidb-operator --namespace=${namespace} ``` diff --git a/zh/upgrade-tidb-operator.md b/zh/upgrade-tidb-operator.md index 3911d275a8..11d33dfd07 100644 --- a/zh/upgrade-tidb-operator.md +++ b/zh/upgrade-tidb-operator.md @@ -69,27 +69,27 @@ summary: 介绍如何升级 TiDB Operator。 kubectl get crd tidbclusters.pingcap.com ``` - 本文以 TiDB Operator v1.4.0-beta.3 为例,你需要替换 `${operator_version}` 为你要升级到的 TiDB Operator 版本。 + 本文以 TiDB Operator v1.4.0 为例,你需要替换 `${operator_version}` 为你要升级到的 TiDB Operator 版本。 3. 获取你要升级的 `tidb-operator` chart 中的 `values.yaml` 文件: {{< copyable "shell-regular" >}} ```shell - mkdir -p ${HOME}/tidb-operator/v1.4.0-beta.3 && \ - helm inspect values pingcap/tidb-operator --version=v1.4.0-beta.3 > ${HOME}/tidb-operator/v1.4.0-beta.3/values-tidb-operator.yaml + mkdir -p ${HOME}/tidb-operator/v1.4.0 && \ + helm inspect values pingcap/tidb-operator --version=v1.4.0 > ${HOME}/tidb-operator/v1.4.0/values-tidb-operator.yaml ``` -4. 修改 `${HOME}/tidb-operator/v1.4.0-beta.3/values-tidb-operator.yaml` 中 `operatorImage` 镜像版本为要升级到的版本。 +4. 修改 `${HOME}/tidb-operator/v1.4.0/values-tidb-operator.yaml` 中 `operatorImage` 镜像版本为要升级到的版本。 -5. 如果你在旧版本 `values.yaml` 中设置了自定义配置,将自定义配置合并到 `${HOME}/tidb-operator/v1.4.0-beta.3/values-tidb-operator.yaml` 中。 +5. 如果你在旧版本 `values.yaml` 中设置了自定义配置,将自定义配置合并到 `${HOME}/tidb-operator/v1.4.0/values-tidb-operator.yaml` 中。 6. 
执行升级: {{< copyable "shell-regular" >}} ```shell - helm upgrade tidb-operator pingcap/tidb-operator --version=v1.4.0-beta.3 -f ${HOME}/tidb-operator/v1.4.0-beta.3/values-tidb-operator.yaml + helm upgrade tidb-operator pingcap/tidb-operator --version=v1.4.0 -f ${HOME}/tidb-operator/v1.4.0/values-tidb-operator.yaml ``` 7. Pod 全部正常启动之后,运行以下命令确认 TiDB Operator 镜像版本: @@ -100,13 +100,13 @@ summary: 介绍如何升级 TiDB Operator。 kubectl get po -n tidb-admin -l app.kubernetes.io/instance=tidb-operator -o yaml | grep 'image:.*operator:' ``` - 如果输出类似下方的结果,则表示升级成功。其中,`v1.4.0-beta.3` 表示已升级到的版本号。 + 如果输出类似下方的结果,则表示升级成功。其中,`v1.4.0` 表示已升级到的版本号。 ``` - image: pingcap/tidb-operator:v1.4.0-beta.3 - image: docker.io/pingcap/tidb-operator:v1.4.0-beta.3 - image: pingcap/tidb-operator:v1.4.0-beta.3 - image: docker.io/pingcap/tidb-operator:v1.4.0-beta.3 + image: pingcap/tidb-operator:v1.4.0 + image: docker.io/pingcap/tidb-operator:v1.4.0 + image: pingcap/tidb-operator:v1.4.0 + image: docker.io/pingcap/tidb-operator:v1.4.0 ``` > **注意:** @@ -137,14 +137,14 @@ summary: 介绍如何升级 TiDB Operator。 wget -O crd.yaml https://raw.githubusercontent.com/pingcap/tidb-operator/${operator_version}/manifests/crd_v1beta1.yaml ``` - 本文以 TiDB Operator v1.4.0-beta.3 为例,你需要替换 `${operator_version}` 为你要升级到的 TiDB Operator 版本。 + 本文以 TiDB Operator v1.4.0 为例,你需要替换 `${operator_version}` 为你要升级到的 TiDB Operator 版本。 2. 下载 `tidb-operator` chart 包文件: {{< copyable "shell-regular" >}} ```shell - wget http://charts.pingcap.org/tidb-operator-v1.4.0-beta.3.tgz + wget http://charts.pingcap.org/tidb-operator-v1.4.0.tgz ``` 3. 下载 TiDB Operator 升级所需的 Docker 镜像: @@ -152,11 +152,11 @@ summary: 介绍如何升级 TiDB Operator。 {{< copyable "shell-regular" >}} ```shell - docker pull pingcap/tidb-operator:v1.4.0-beta.3 - docker pull pingcap/tidb-backup-manager:v1.4.0-beta.3 + docker pull pingcap/tidb-operator:v1.4.0 + docker pull pingcap/tidb-backup-manager:v1.4.0 - docker save -o tidb-operator-v1.4.0-beta.3.tar pingcap/tidb-operator:v1.4.0-beta.3 - docker save -o tidb-backup-manager-v1.4.0-beta.3.tar pingcap/tidb-backup-manager:v1.4.0-beta.3 + docker save -o tidb-operator-v1.4.0.tar pingcap/tidb-operator:v1.4.0 + docker save -o tidb-backup-manager-v1.4.0.tar pingcap/tidb-backup-manager:v1.4.0 ``` 2. 将下载的文件和镜像上传到需要升级的服务器上,在服务器上按照以下步骤进行安装: @@ -184,9 +184,9 @@ summary: 介绍如何升级 TiDB Operator。 {{< copyable "shell-regular" >}} ```shell - tar zxvf tidb-operator-v1.4.0-beta.3.tgz && \ - mkdir -p ${HOME}/tidb-operator/v1.4.0-beta.3 && \ - cp tidb-operator/values.yaml ${HOME}/tidb-operator/v1.4.0-beta.3/values-tidb-operator.yaml + tar zxvf tidb-operator-v1.4.0.tgz && \ + mkdir -p ${HOME}/tidb-operator/v1.4.0 && \ + cp tidb-operator/values.yaml ${HOME}/tidb-operator/v1.4.0/values-tidb-operator.yaml ``` 4. 安装 Docker 镜像到服务器上: @@ -194,20 +194,20 @@ summary: 介绍如何升级 TiDB Operator。 {{< copyable "shell-regular" >}} ```shell - docker load -i tidb-operator-v1.4.0-beta.3.tar && \ - docker load -i tidb-backup-manager-v1.4.0-beta.3.tar + docker load -i tidb-operator-v1.4.0.tar && \ + docker load -i tidb-backup-manager-v1.4.0.tar ``` -3. 修改 `${HOME}/tidb-operator/v1.4.0-beta.3/values-tidb-operator.yaml` 中 `operatorImage` 镜像版本为要升级到的版本。 +3. 修改 `${HOME}/tidb-operator/v1.4.0/values-tidb-operator.yaml` 中 `operatorImage` 镜像版本为要升级到的版本。 -4. 如果你在旧版本 `values.yaml` 中设置了自定义配置,将自定义配置合并到 `${HOME}/tidb-operator/v1.4.0-beta.3/values-tidb-operator.yaml` 中。 +4. 如果你在旧版本 `values.yaml` 中设置了自定义配置,将自定义配置合并到 `${HOME}/tidb-operator/v1.4.0/values-tidb-operator.yaml` 中。 5. 
执行升级: {{< copyable "shell-regular" >}} ```shell - helm upgrade tidb-operator ./tidb-operator --version=v1.4.0-beta.3 -f ${HOME}/tidb-operator/v1.4.0-beta.3/values-tidb-operator.yaml + helm upgrade tidb-operator ./tidb-operator --version=v1.4.0 -f ${HOME}/tidb-operator/v1.4.0/values-tidb-operator.yaml ``` 6. Pod 全部正常启动之后,运行以下命令确认 TiDB Operator 镜像版本: @@ -218,13 +218,13 @@ summary: 介绍如何升级 TiDB Operator。 kubectl get po -n tidb-admin -l app.kubernetes.io/instance=tidb-operator -o yaml | grep 'image:.*operator:' ``` - 如果输出类似下方的结果,则表示升级成功。其中,`v1.4.0-beta.3` 表示已升级到的版本号。 + 如果输出类似下方的结果,则表示升级成功。其中,`v1.4.0` 表示已升级到的版本号。 ``` - image: pingcap/tidb-operator:v1.4.0-beta.3 - image: docker.io/pingcap/tidb-operator:v1.4.0-beta.3 - image: pingcap/tidb-operator:v1.4.0-beta.3 - image: docker.io/pingcap/tidb-operator:v1.4.0-beta.3 + image: pingcap/tidb-operator:v1.4.0 + image: docker.io/pingcap/tidb-operator:v1.4.0 + image: pingcap/tidb-operator:v1.4.0 + image: docker.io/pingcap/tidb-operator:v1.4.0 ``` > **注意:**