en, zh: Bump Operator to v1.1.13 (#1298)
KanShiori authored Jul 2, 2021
1 parent bb09545 commit 2a6619d
Showing 18 changed files with 136 additions and 136 deletions.
6 changes: 3 additions & 3 deletions en/cheat-sheet.md
@@ -485,7 +485,7 @@ For example:
{{< copyable "shell-regular" >}}

```shell
-helm inspect values pingcap/tidb-operator --version=v1.1.12 > values-tidb-operator.yaml
+helm inspect values pingcap/tidb-operator --version=v1.1.13 > values-tidb-operator.yaml
```

### Deploy using Helm chart
@@ -501,7 +501,7 @@ For example:
{{< copyable "shell-regular" >}}

```shell
-helm install tidb-operator pingcap/tidb-operator --namespace=tidb-admin --version=v1.1.12 -f values-tidb-operator.yaml
+helm install tidb-operator pingcap/tidb-operator --namespace=tidb-admin --version=v1.1.13 -f values-tidb-operator.yaml
```

### View the deployed Helm release
@@ -525,7 +525,7 @@ For example:
{{< copyable "shell-regular" >}}

```shell
-helm upgrade tidb-operator pingcap/tidb-operator --version=v1.1.12 -f values-tidb-operator.yaml
+helm upgrade tidb-operator pingcap/tidb-operator --version=v1.1.13 -f values-tidb-operator.yaml
```

### Delete Helm release
6 changes: 3 additions & 3 deletions en/configure-storage-class.md
@@ -77,15 +77,15 @@ The following process uses `/mnt/disks` as the discovery directory and `local-st
{{< copyable "shell-regular" >}}

```shell
-kubectl apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.1.12/manifests/local-dind/local-volume-provisioner.yaml
+kubectl apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.1.13/manifests/local-dind/local-volume-provisioner.yaml
```

If the server has no access to the Internet, download the `local-volume-provisioner.yaml` file on a machine with Internet access and then install it.

{{< copyable "shell-regular" >}}

```shell
-wget https://raw.githubusercontent.com/pingcap/tidb-operator/v1.1.12/manifests/local-dind/local-volume-provisioner.yaml &&
+wget https://raw.githubusercontent.com/pingcap/tidb-operator/v1.1.13/manifests/local-dind/local-volume-provisioner.yaml &&
kubectl apply -f ./local-volume-provisioner.yaml
```

@@ -254,7 +254,7 @@ Finally, execute the `kubectl apply` command to deploy `local-volume-provisioner
{{< copyable "shell-regular" >}}
```shell
-kubectl apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.1.12/manifests/local-dind/local-volume-provisioner.yaml
+kubectl apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.1.13/manifests/local-dind/local-volume-provisioner.yaml
```
When you later deploy tidb clusters, deploy TiDB Binlog for incremental backups, or do full backups, configure the corresponding `StorageClass` for use.
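As a reference for that last step, a minimal, hypothetical `TidbCluster` fragment that consumes the class created above might look as follows; all names and sizes are illustrative, not part of the patched document:

```yaml
# Hypothetical fragment only: points PD and TiKV volumes at the
# local-storage StorageClass created by local-volume-provisioner.
apiVersion: pingcap.com/v1alpha1
kind: TidbCluster
metadata:
  name: basic
spec:
  pd:
    storageClassName: local-storage
    requests:
      storage: "10Gi"
  tikv:
    storageClassName: local-storage
    requests:
      storage: "100Gi"
```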
2 changes: 1 addition & 1 deletion en/deploy-on-alibaba-cloud.md
@@ -89,7 +89,7 @@ All the instances except ACK mandatory workers are deployed across availability
tikv_count = 3
tidb_count = 2
pd_count = 3
-operator_version = "v1.1.12"
+operator_version = "v1.1.13"
```

* To deploy TiFlash in the cluster, set `create_tiflash_node_pool = true` in `terraform.tfvars`. You can also configure the node count and instance type of the TiFlash node pool by modifying `tiflash_count` and `tiflash_instance_type`. By default, the value of `tiflash_count` is `2`, and the value of `tiflash_instance_type` is `ecs.i2.2xlarge`.
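Expressed as a `terraform.tfvars` sketch (a hypothetical fragment; the values are just the defaults stated above):

```
# Enable the TiFlash node pool; count and instance type are the defaults.
create_tiflash_node_pool = true
tiflash_count            = 2
tiflash_instance_type    = "ecs.i2.2xlarge"
```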
4 changes: 2 additions & 2 deletions en/deploy-tidb-from-kubernetes-gke.md
@@ -97,15 +97,15 @@ If you see `Ready` for all nodes, congratulations! You've set up your first Kube
TiDB Operator uses [Custom Resource Definition (CRD)](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/#customresourcedefinitions) to extend Kubernetes. Therefore, to use TiDB Operator, you must first create the `TidbCluster` CRD.

```shell
-kubectl apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.1.12/manifests/crd.yaml && \
+kubectl apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.1.13/manifests/crd.yaml && \
kubectl get crd tidbclusters.pingcap.com
```

After the `TidbCluster` CRD is created, install TiDB Operator in your Kubernetes cluster.

```shell
kubectl create namespace tidb-admin
-helm install --namespace tidb-admin tidb-operator pingcap/tidb-operator --version v1.1.12
+helm install --namespace tidb-admin tidb-operator pingcap/tidb-operator --version v1.1.13
kubectl get po -n tidb-admin -l app.kubernetes.io/name=tidb-operator
```

28 changes: 14 additions & 14 deletions en/deploy-tidb-operator.md
@@ -49,15 +49,15 @@ TiDB Operator uses [Custom Resource Definition (CRD)](https://kubernetes.io/docs
{{< copyable "shell-regular" >}}

```shell
-kubectl apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.1.12/manifests/crd.yaml
+kubectl apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.1.13/manifests/crd.yaml
```

If the server cannot access the Internet, you need to download the `crd.yaml` file on a machine with Internet access before installing:

{{< copyable "shell-regular" >}}

```shell
-wget https://raw.githubusercontent.com/pingcap/tidb-operator/v1.1.12/manifests/crd.yaml
+wget https://raw.githubusercontent.com/pingcap/tidb-operator/v1.1.13/manifests/crd.yaml
kubectl apply -f ./crd.yaml
```

@@ -99,7 +99,7 @@ After creating CRDs in the step above, there are two methods to deploy TiDB Oper

> **Note:**
>
-> `${chart_version}` represents the chart version of TiDB Operator. For example, `v1.1.12`. You can view the currently supported versions by running the `helm search repo -l tidb-operator` command.
+> `${chart_version}` represents the chart version of TiDB Operator. For example, `v1.1.13`. You can view the currently supported versions by running the `helm search repo -l tidb-operator` command.

2. Configure TiDB Operator

@@ -143,15 +143,15 @@ If your server cannot access the Internet, install TiDB Operator offline by the
{{< copyable "shell-regular" >}}

```shell
-wget http://charts.pingcap.org/tidb-operator-v1.1.12.tgz
+wget http://charts.pingcap.org/tidb-operator-v1.1.13.tgz
```

-Copy the `tidb-operator-v1.1.12.tgz` file to the target server and extract it to the current directory:
+Copy the `tidb-operator-v1.1.13.tgz` file to the target server and extract it to the current directory:

{{< copyable "shell-regular" >}}

```shell
-tar zxvf tidb-operator.v1.1.12.tgz
+tar zxvf tidb-operator-v1.1.13.tgz
```

2. Download the Docker images used by TiDB Operator
@@ -163,8 +163,8 @@
{{< copyable "" >}}

```shell
-pingcap/tidb-operator:v1.1.12
-pingcap/tidb-backup-manager:v1.1.12
+pingcap/tidb-operator:v1.1.13
+pingcap/tidb-backup-manager:v1.1.13
bitnami/kubectl:latest
pingcap/advanced-statefulset:v0.3.3
k8s.gcr.io/kube-scheduler:v1.16.9
@@ -177,13 +177,13 @@
{{< copyable "shell-regular" >}}

```shell
-docker pull pingcap/tidb-operator:v1.1.12
-docker pull pingcap/tidb-backup-manager:v1.1.12
+docker pull pingcap/tidb-operator:v1.1.13
+docker pull pingcap/tidb-backup-manager:v1.1.13
docker pull bitnami/kubectl:latest
docker pull pingcap/advanced-statefulset:v0.3.3
-docker save -o tidb-operator-v1.1.12.tar pingcap/tidb-operator:v1.1.12
-docker save -o tidb-backup-manager-v1.1.12.tar pingcap/tidb-backup-manager:v1.1.12
+docker save -o tidb-operator-v1.1.13.tar pingcap/tidb-operator:v1.1.13
+docker save -o tidb-backup-manager-v1.1.13.tar pingcap/tidb-backup-manager:v1.1.13
docker save -o bitnami-kubectl.tar bitnami/kubectl:latest
docker save -o advanced-statefulset-v0.3.3.tar pingcap/advanced-statefulset:v0.3.3
```
@@ -193,8 +193,8 @@
{{< copyable "shell-regular" >}}

```shell
-docker load -i tidb-operator-v1.1.12.tar
-docker load -i tidb-backup-manager-v1.1.12.tar
+docker load -i tidb-operator-v1.1.13.tar
+docker load -i tidb-backup-manager-v1.1.13.tar
docker load -i bitnami-kubectl.tar
docker load -i advanced-statefulset-v0.3.3.tar
```
10 changes: 5 additions & 5 deletions en/get-started.md
@@ -252,7 +252,7 @@ Execute this command to install the CRDs into your cluster:
{{< copyable "shell-regular" >}}

```shell
-kubectl apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.1.12/manifests/crd.yaml
+kubectl apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.1.13/manifests/crd.yaml
```

Expected output:
@@ -304,17 +304,17 @@ This section describes how to install TiDB Operator using Helm 3.
{{< copyable "shell-regular" >}}

```shell
-helm install --namespace tidb-admin tidb-operator pingcap/tidb-operator --version v1.1.12
+helm install --namespace tidb-admin tidb-operator pingcap/tidb-operator --version v1.1.13
```

If you have trouble accessing Docker Hub, you can try images hosted in Alibaba Cloud:

{{< copyable "shell-regular" >}}

```
-helm install --namespace tidb-admin tidb-operator pingcap/tidb-operator --version v1.1.12 \
---set operatorImage=registry.cn-beijing.aliyuncs.com/tidb/tidb-operator:v1.1.12 \
---set tidbBackupManagerImage=registry.cn-beijing.aliyuncs.com/tidb/tidb-backup-manager:v1.1.12 \
+helm install --namespace tidb-admin tidb-operator pingcap/tidb-operator --version v1.1.13 \
+--set operatorImage=registry.cn-beijing.aliyuncs.com/tidb/tidb-operator:v1.1.13 \
+--set tidbBackupManagerImage=registry.cn-beijing.aliyuncs.com/tidb/tidb-backup-manager:v1.1.13 \
--set scheduler.kubeSchedulerImageName=registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler
```

20 changes: 10 additions & 10 deletions en/tidb-toolkit.md
@@ -201,12 +201,12 @@ helm search repo pingcap

```
NAME CHART VERSION APP VERSION DESCRIPTION
-pingcap/tidb-backup v1.1.12 A Helm chart for TiDB Backup or Restore
-pingcap/tidb-cluster v1.1.12 A Helm chart for TiDB Cluster
-pingcap/tidb-drainer v1.1.12 A Helm chart for TiDB Binlog drainer.
-pingcap/tidb-lightning v1.1.12 A Helm chart for TiDB Lightning
-pingcap/tidb-operator v1.1.12 v1.1.12 tidb-operator Helm chart for Kubernetes
-pingcap/tikv-importer v1.1.12 A Helm chart for TiKV Importer
+pingcap/tidb-backup v1.1.13 A Helm chart for TiDB Backup or Restore
+pingcap/tidb-cluster v1.1.13 A Helm chart for TiDB Cluster
+pingcap/tidb-drainer v1.1.13 A Helm chart for TiDB Binlog drainer.
+pingcap/tidb-lightning v1.1.13 A Helm chart for TiDB Lightning
+pingcap/tidb-operator v1.1.13 v1.1.13 tidb-operator Helm chart for Kubernetes
+pingcap/tikv-importer v1.1.13 A Helm chart for TiKV Importer
```

When a new version of chart has been released, you can use `helm repo update` to update the repository cached locally:
@@ -268,17 +268,17 @@ Use the following command to download the chart file required for cluster instal
{{< copyable "shell-regular" >}}

```shell
-wget http://charts.pingcap.org/tidb-operator-v1.1.12.tgz
-wget http://charts.pingcap.org/tidb-drainer-v1.1.12.tgz
-wget http://charts.pingcap.org/tidb-lightning-v1.1.12.tgz
+wget http://charts.pingcap.org/tidb-operator-v1.1.13.tgz
+wget http://charts.pingcap.org/tidb-drainer-v1.1.13.tgz
+wget http://charts.pingcap.org/tidb-lightning-v1.1.13.tgz
```

Copy these chart files to the server and decompress them. You can use these charts to install the corresponding components by running the `helm install` command. Take `tidb-operator` as an example:

{{< copyable "shell-regular" >}}

```shell
-tar zxvf tidb-operator.v1.1.12.tgz
+tar zxvf tidb-operator-v1.1.13.tgz
helm install ${release_name} ./tidb-operator --namespace=${namespace}
```

56 changes: 28 additions & 28 deletions en/upgrade-tidb-operator.md
@@ -20,7 +20,7 @@ This document describes how to upgrade TiDB Operator and Kubernetes.
{{< copyable "shell-regular" >}}

```shell
-kubectl apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.1.12/manifests/crd.yaml && \
+kubectl apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.1.13/manifests/crd.yaml && \
kubectl get crd tidbclusters.pingcap.com
```

@@ -29,16 +29,16 @@ This document describes how to upgrade TiDB Operator and Kubernetes.
{{< copyable "shell-regular" >}}

```shell
-mkdir -p ${HOME}/tidb-operator/v1.1.12 && \
-helm inspect values pingcap/tidb-operator --version=v1.1.12 > ${HOME}/tidb-operator/v1.1.12/values-tidb-operator.yaml
+mkdir -p ${HOME}/tidb-operator/v1.1.13 && \
+helm inspect values pingcap/tidb-operator --version=v1.1.13 > ${HOME}/tidb-operator/v1.1.13/values-tidb-operator.yaml
```

-3. In the `${HOME}/tidb-operator/v1.1.12/values-tidb-operator.yaml` file, modify the `operatorImage` version to the new TiDB Operator version. Merge the customized configuration in the old `values.yaml` file to the `${HOME}/tidb-operator/v1.1.12/values-tidb-operator.yaml` file, and then execute `helm upgrade`:
+3. In the `${HOME}/tidb-operator/v1.1.13/values-tidb-operator.yaml` file, modify the `operatorImage` version to the new TiDB Operator version. Merge the customized configuration in the old `values.yaml` file to the `${HOME}/tidb-operator/v1.1.13/values-tidb-operator.yaml` file, and then execute `helm upgrade`:

{{< copyable "shell-regular" >}}

```shell
-helm upgrade tidb-operator pingcap/tidb-operator --version=v1.1.12 -f ${HOME}/tidb-operator/v1.1.12/values-tidb-operator.yaml
+helm upgrade tidb-operator pingcap/tidb-operator --version=v1.1.13 -f ${HOME}/tidb-operator/v1.1.13/values-tidb-operator.yaml
```

After all the Pods start normally, execute the following command to check the image of TiDB Operator:
@@ -49,13 +49,13 @@
kubectl get po -n tidb-admin -l app.kubernetes.io/instance=tidb-operator -o yaml | grep 'image:.*operator:'
```

-If TiDB Operator is successfully upgraded, the expected output is as follows. `v1.1.12` represents the desired version of TiDB Operator.
+If TiDB Operator is successfully upgraded, the expected output is as follows. `v1.1.13` represents the desired version of TiDB Operator.

```
-image: pingcap/tidb-operator:v1.1.12
-image: docker.io/pingcap/tidb-operator:v1.1.12
-image: pingcap/tidb-operator:v1.1.12
-image: docker.io/pingcap/tidb-operator:v1.1.12
+image: pingcap/tidb-operator:v1.1.13
+image: docker.io/pingcap/tidb-operator:v1.1.13
+image: pingcap/tidb-operator:v1.1.13
+image: docker.io/pingcap/tidb-operator:v1.1.13
```

> **Note:**
@@ -73,27 +73,27 @@ If your server cannot access the Internet, you can take the following steps to u
{{< copyable "shell-regular" >}}

```shell
-wget https://raw.githubusercontent.com/pingcap/tidb-operator/v1.1.12/manifests/crd.yaml
+wget https://raw.githubusercontent.com/pingcap/tidb-operator/v1.1.13/manifests/crd.yaml
```

2. Download the `tidb-operator` chart package file.

{{< copyable "shell-regular" >}}

```shell
-wget http://charts.pingcap.org/tidb-operator-v1.1.12.tgz
+wget http://charts.pingcap.org/tidb-operator-v1.1.13.tgz
```

3. Download the Docker images required for the new TiDB Operator version:

{{< copyable "shell-regular" >}}

```shell
-docker pull pingcap/tidb-operator:v1.1.12
-docker pull pingcap/tidb-backup-manager:v1.1.12
+docker pull pingcap/tidb-operator:v1.1.13
+docker pull pingcap/tidb-backup-manager:v1.1.13
-docker save -o tidb-operator-v1.1.12.tar pingcap/tidb-operator:v1.1.12
-docker save -o tidb-backup-manager-v1.1.12.tar pingcap/tidb-backup-manager:v1.1.12
+docker save -o tidb-operator-v1.1.13.tar pingcap/tidb-operator:v1.1.13
+docker save -o tidb-backup-manager-v1.1.13.tar pingcap/tidb-backup-manager:v1.1.13
```

2. Upload the downloaded files and images to the server that needs to be upgraded, and then take the following steps for installation:
@@ -111,26 +111,26 @@ If your server cannot access the Internet, you can take the following steps to u
{{< copyable "shell-regular" >}}

```shell
-tar zxvf tidb-operator-v1.1.12.tgz && \
-mkdir -p ${HOME}/tidb-operator/v1.1.12 &&
-cp tidb-operator/values.yaml ${HOME}/tidb-operator/v1.1.12/values-tidb-operator.yaml
+tar zxvf tidb-operator-v1.1.13.tgz && \
+mkdir -p ${HOME}/tidb-operator/v1.1.13 &&
+cp tidb-operator/values.yaml ${HOME}/tidb-operator/v1.1.13/values-tidb-operator.yaml
```

3. Install the Docker images on the server:

{{< copyable "shell-regular" >}}

```shell
-docker load -i tidb-operator-v1.1.12.tar
-docker load -i tidb-backup-manager-v1.1.12.tar
+docker load -i tidb-operator-v1.1.13.tar
+docker load -i tidb-backup-manager-v1.1.13.tar
```

-3. In the `${HOME}/tidb-operator/v1.1.12/values-tidb-operator.yaml` file, modify the `operatorImage` version to the new TiDB Operator version. Merge the customized configuration in the old `values.yaml` file to the `${HOME}/tidb-operator/v1.1.12/values-tidb-operator.yaml` file, and then execute `helm upgrade`:
+3. In the `${HOME}/tidb-operator/v1.1.13/values-tidb-operator.yaml` file, modify the `operatorImage` version to the new TiDB Operator version. Merge the customized configuration in the old `values.yaml` file to the `${HOME}/tidb-operator/v1.1.13/values-tidb-operator.yaml` file, and then execute `helm upgrade`:

{{< copyable "shell-regular" >}}

```shell
-helm upgrade tidb-operator ./tidb-operator --version=v1.1.12 -f ${HOME}/tidb-operator/v1.1.12/values-tidb-operator.yaml
+helm upgrade tidb-operator ./tidb-operator --version=v1.1.13 -f ${HOME}/tidb-operator/v1.1.13/values-tidb-operator.yaml
```

After all the Pods start normally, execute the following command to check the image version of TiDB Operator:
@@ -141,13 +141,13 @@
kubectl get po -n tidb-admin -l app.kubernetes.io/instance=tidb-operator -o yaml | grep 'image:.*operator:'
```

-If TiDB Operator is successfully upgraded, the expected output is as follows. `v1.1.12` represents the new version of TiDB Operator.
+If TiDB Operator is successfully upgraded, the expected output is as follows. `v1.1.13` represents the new version of TiDB Operator.

```
-image: pingcap/tidb-operator:v1.1.12
-image: docker.io/pingcap/tidb-operator:v1.1.12
-image: pingcap/tidb-operator:v1.1.12
-image: docker.io/pingcap/tidb-operator:v1.1.12
+image: pingcap/tidb-operator:v1.1.13
+image: docker.io/pingcap/tidb-operator:v1.1.13
+image: pingcap/tidb-operator:v1.1.13
+image: docker.io/pingcap/tidb-operator:v1.1.13
```

> **Note:**