*: fix all markdownlint issues in docs repo (#1410)
* *: fix all markdownlint issues in docs repo

* fix lint issues

* another batch of lint issues fixed

* fix lint issues

* finish fixing the issue in dev

* fix v3.0 issues

* remove trailing whitespace from all files in v2.1, v3.0 and dev

* Fix MD001 issues

* fix MD012 issues

* fix all other issues

* Update dev/community.md

Co-Authored-By: Lilian Lee <lilin@pingcap.com>

* Update dev/community.md

Co-Authored-By: Lilian Lee <lilin@pingcap.com>
yikeke and lilin90 authored Aug 5, 2019
1 parent 1d79564 commit f410df4
Showing 630 changed files with 2,959 additions and 2,944 deletions.
6 changes: 2 additions & 4 deletions .github/ISSUE_TEMPLATE/question.md
@@ -6,17 +6,15 @@ about: Usage question that isn't answered in docs or discussion
## Question

**This repository is ONLY used to solve issues related to DOCS.
For other issues (related to TiDB, PD, etc), please move to [other repositories](https://github.com/pingcap/).**

Before submitting your question, make sure you have:

- Searched existing Stack Overflow questions.
- Googled your question.
- Searched the open and closed [GitHub issues](https://github.com/pingcap/docs/issues?utf8=%E2%9C%93&q=is%3Aissue).
- Read the documentation:
- [docs](https://github.com/pingcap/docs)
- [docs-cn](https://github.com/pingcap/docs-cn)

Now, please describe your question here:


2 changes: 1 addition & 1 deletion README.md
@@ -90,7 +90,7 @@ TiDB supports the ability to store data in both row-oriented and (coming soon) c
### SQL Plan Management

In both MySQL and TiDB, optimizer hints are available to override the default query execution plan with a better-known plan. The problem with this approach is that it requires an application developer to modify the query text to inject the hint, which can also be undesirable when an ORM is used to generate the query.

In TiDB 3.0, you will be able to bind queries to a specific execution plan directly within the TiDB server. This method is entirely transparent to application code.

### Open Source
2 changes: 1 addition & 1 deletion _index.md
@@ -21,7 +21,7 @@ TiDB can be deployed on-premise or in-cloud. The following deployment options ar
- [GKE (Google Kubernetes Engine)](/v3.0/tidb-in-kubernetes/deploy/gcp-gke.md)
- [Google Cloud Shell](/v3.0/tidb-in-kubernetes/get-started/deploy-tidb-from-kubernetes-gke.md)
- [Alibaba Cloud ACK (Container Service for Kubernetes)](/v3.0/tidb-in-kubernetes/deploy/alibaba-cloud.md)

Or deploy TiDB locally using:

- [DinD (Docker in Docker)](/v3.0/tidb-in-kubernetes/get-started/deploy-tidb-from-kubernetes-dind.md)
2 changes: 1 addition & 1 deletion dev/_index.md
@@ -22,7 +22,7 @@ TiDB can be deployed on-premise or in-cloud. The following deployment options ar
- [GKE (Google Kubernetes Engine)](/tidb-in-kubernetes/deploy/gcp-gke.md)
- [Google Cloud Shell](/tidb-in-kubernetes/get-started/deploy-tidb-from-kubernetes-gke.md)
- [Alibaba Cloud ACK (Container Service for Kubernetes)](/tidb-in-kubernetes/deploy/alibaba-cloud.md)

Or deploy TiDB locally using:

- [DinD (Docker in Docker)](/tidb-in-kubernetes/get-started/deploy-tidb-from-kubernetes-dind.md)
6 changes: 3 additions & 3 deletions dev/benchmark/dm-v1-alpha.md
@@ -126,9 +126,9 @@ syncer:

#### DM key indicator monitor

![dm-benchmark-01](/media/dm-benchmark-01.png)

#### TiDB key indicator monitor

![dm-benchmark-02](/media/dm-benchmark-02.png)
![dm-benchmark-03](/media/dm-benchmark-03.png)
6 changes: 3 additions & 3 deletions dev/benchmark/how-to-run-sysbench.md
@@ -3,7 +3,7 @@ title: How to Test TiDB Using Sysbench
category: benchmark
---

# How to Test TiDB Using Sysbench

In this test, Sysbench 1.0.14 and TiDB 3.0 Beta are used. It is recommended to use Sysbench 1.0 or later, which can be downloaded [here](https://github.com/akopytov/sysbench/releases/tag/1.0.14).

@@ -141,7 +141,7 @@ Restart MySQL client and execute the following SQL statement to create a databas
create database sbtest;
```

Adjust the order in which Sysbench scripts create indexes. Sysbench imports data in the order of "Build Table -> Insert Data -> Create Index", which slows down data import for TiDB. You can adjust the order to speed up the import. Suppose that you use Sysbench version <https://github.com/akopytov/sysbench/tree/1.0.14>. You can adjust the order in either of the following two ways.

1. Download the TiDB-modified [oltp_common.lua](https://raw.githubusercontent.com/pingcap/tidb-bench/master/sysbench/sysbench-patch/oltp_common.lua) file and overwrite the `/usr/share/sysbench/oltp_common.lua` file with it.
2. Move the [235th](https://github.com/akopytov/sysbench/blob/1.0.14/src/lua/oltp_common.lua#L235) to [240th](https://github.com/akopytov/sysbench/blob/1.0.14/src/lua/oltp_common.lua#L240) lines of `/usr/share/sysbench/oltp_common.lua` to be right behind the 198th line.
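The second option amounts to moving a 1-based, inclusive line range so it sits just after an earlier line. A minimal sketch of that operation (a hypothetical helper, not part of Sysbench or these docs; the file path and line numbers come from the step above):

```python
def move_lines(lines, start, end, after):
    """Move lines[start..end] (1-based, inclusive) to just after line `after`.

    Assumes after < start, as in the Sysbench case (lines 235-240 moved to
    just behind line 198), so positions up to `after` are unaffected by
    removing the block first.
    """
    block = lines[start - 1:end]           # the range to relocate
    rest = lines[:start - 1] + lines[end:] # the file with the block removed
    return rest[:after] + block + rest[after:]

# Toy demonstration: move lines 4-5 ("d", "e") to just after line 2 ("b").
doc = ["a", "b", "c", "d", "e", "f"]
print(move_lines(doc, 4, 5, 2))  # ['a', 'b', 'd', 'e', 'c', 'f']
```

Applied to the real file, this would read `/usr/share/sysbench/oltp_common.lua`, call `move_lines(src, 235, 240, 198)`, and write the result back; back up the original first.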
@@ -214,7 +214,7 @@ Sysbench test was carried on each of the tidb-servers. And the final result was

| Type | Thread | TPS | QPS | avg.latency(ms) | .95.latency(ms) | max.latency(ms) |
|:---- |:---- |:---- |:---- |:----------------|:----------------- |:---- |
| oltp_update_index | 3\*8 | 9668.98 | 9668.98 | 2.51 | 3.19 | 103.88|
| oltp_update_index | 3\*16 | 12834.99 | 12834.99 | 3.79 | 5.47 | 176.90 |
| oltp_update_index | 3\*32 | 15955.77 | 15955.77 | 6.07 | 9.39 | 4787.14 |
| oltp_update_index | 3\*64 | 18697.17 | 18697.17 | 10.34 | 17.63 | 4539.04 |
28 changes: 14 additions & 14 deletions dev/benchmark/sysbench-v2.md
@@ -1,32 +1,32 @@
---
title: TiDB Sysbench Performance Test Report -- v2.0.0 vs. v1.0.0
category: benchmark
---

# TiDB Sysbench Performance Test Report -- v2.0.0 vs. v1.0.0

## Test purpose

This test aims to compare the performances of TiDB 1.0 and TiDB 2.0.

## Test version, time, and place

TiDB version: v1.0.8 vs. v2.0.0-rc6

Time: April 2018

Place: Beijing, China

## Test environment

IDC machine

| Type | Name |
| -------- | --------- |
| OS | linux (CentOS 7.3.1611) |
| CPU | 40 vCPUs, Intel(R) Xeon(R) CPU E5-2630 v4 @ 2.20GHz |
| RAM | 128GB |
| DISK | Optane 500GB SSD * 1 |

## Test plan

@@ -51,19 +51,19 @@ IDC machine
### TiKV parameter configuration

- v1.0.8

```
sync-log = false
grpc-concurrency = 8
grpc-raft-conn-num = 24
```

- v2.0.0-rc6

```
sync-log = false
grpc-concurrency = 8
grpc-raft-conn-num = 24
use-delete-range: false
```

@@ -83,7 +83,7 @@ IDC machine

## Test result

### Standard `Select` test

| Version | Table count | Table size | Sysbench threads |QPS | Latency (avg/.95) |
| :---: | :---: | :---: | :---: | :---: | :---: |
2 changes: 1 addition & 1 deletion dev/benchmark/sysbench-v3.md
@@ -26,7 +26,7 @@ IDC machine:
| OS | Linux (CentOS 7.3.1611) |
| CPU | 40 vCPUs, Intel(R) Xeon(R) CPU E5-2630 v4 @ 2.20GHz |
| RAM | 128GB |
| DISK | Optane 500GB SSD \* 1 |

Sysbench version: 1.1.0

66 changes: 33 additions & 33 deletions dev/benchmark/sysbench.md
@@ -31,11 +31,11 @@ Place: Beijing
| OS | Linux (CentOS 7.3.1611) |
| CPU | 40 vCPUs, Intel(R) Xeon(R) CPU E5-2630 v4 @ 2.20GHz |
| RAM | 128GB |
| DISK | 1.5T SSD \* 2 + Optane SSD \* 1 |

- Sysbench version: 1.0.6

- Test script: <https://github.com/pingcap/tidb-bench/tree/cwen/not_prepared_statement/sysbench>.

## Test scenarios

Expand All @@ -51,8 +51,8 @@ CREATE TABLE `sbtest` (
`pad` char(60) NOT NULL DEFAULT '',
PRIMARY KEY (`id`),
KEY `k_1` (`k`)
) ENGINE=InnoDB
```

The deployment and configuration details:

@@ -64,14 +64,14 @@ The deployment and configuration details:
172.16.10.8 1*tidb 1*pd 1*sysbench
// Each physical node has three disks.
data3: 2 tikv (Optane SSD)
data2: 1 tikv
data1: 1 tikv
// TiKV configuration
sync-log = false
grpc-concurrency = 8
grpc-raft-conn-num = 24
[defaultcf]
block-cache-size = "12GB"
[writecf]
@@ -81,23 +81,23 @@ block-cache-size = "2GB"
// MySQL deployment
// Use the semi-synchronous replication and asynchronous replication to deploy two replicas respectively.
172.16.20.4 master
172.16.20.6 slave
172.16.20.7 slave
172.16.10.8 1*sysbench
Mysql version: 5.6.37
// MySQL configuration
thread_cache_size = 64
innodb_buffer_pool_size = 64G
innodb_file_per_table = 1
innodb_flush_log_at_trx_commit = 0
datadir = /data3/mysql
max_connections = 2000
```

- OLTP RW test

| - | Table count | Table size | Sysbench threads | TPS | QPS | Latency(avg / .95) |
| :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| TiDB | 32 | 1 million | 64 * 4 | 3834 | 76692 | 67.04 ms / 110.88 ms |
@@ -111,9 +111,9 @@ max_connections = 2000
| Mysql | 32 | 5 million | 256 | 1902 | 38045 | 134.56 ms / 363.18 ms |
| Mysql | 32 | 10 million | 256 | 1770 | 35416 | 144.55 ms / 383.33 ms |

![sysbench-01](/media/sysbench-01.png)

![sysbench-02](/media/sysbench-02.png)

- `Select` RW test

@@ -130,9 +130,9 @@ max_connections = 2000
| Mysql | 32 | 5 million | 256 | 386866 | 0.66 ms / 1.64 ms |
| Mysql | 32 | 10 million | 256 | 388273 | 0.66 ms / 1.64 ms |

![sysbench-03](/media/sysbench-03.png)

![sysbench-04](/media/sysbench-04.png)

- `Insert` RW test

@@ -147,37 +147,37 @@ max_connections = 2000
| Mysql | 32 | 1 million | 128 | 14884 | 8.58 ms / 21.11 ms |
| Mysql | 32 | 1 million | 256 | 14508 | 17.64 ms / 44.98 ms |
| Mysql | 32 | 5 million | 256 | 10593 | 24.16 ms / 82.96 ms |
| Mysql | 32 | 10 million | 256 | 9813 | 26.08 ms / 94.10 ms |

![sysbench-05](/media/sysbench-05.png)

![](/media/sysbench-06.png)
![sysbench-06](/media/sysbench-06.png)

### Scenario two: TiDB horizontal scalability test

The deployment and configuration details:

```
// TiDB deployment
172.16.20.3 4*tikv
172.16.10.2 1*tidb 1*pd 1*sysbench
// Each physical node has three disks.
data3: 2 tikv (Optane SSD)
data2: 1 tikv
data1: 1 tikv
// TiKV configuration
sync-log = false
grpc-concurrency = 8
grpc-raft-conn-num = 24
[defaultcf]
block-cache-size = "12GB"
[writecf]
block-cache-size = "5GB"
[raftdb.defaultcf]
block-cache-size = "2GB"
```

- OLTP RW test

@@ -188,7 +188,7 @@ block-cache-size = "2GB"
| 4 TiDB physical nodes | 32 | 1 million | 256 * 4 | 8984 | 179692 | 114.96 ms / 176.73 ms |
| 6 TiDB physical nodes | 32 | 5 million | 256 * 6 | 12953 | 259072 | 117.80 ms / 200.47 ms |

![sysbench-07](/media/sysbench-07.png)

- `Select` RW test

@@ -199,7 +199,7 @@ block-cache-size = "2GB"
| 4 TiDB physical nodes | 32 | 1 million | 256 * 4 | 289933 | 3.53 ms / 8.74 ms |
| 6 TiDB physical nodes | 32 | 5 million | 256 * 6 | 435313 | 3.55 ms / 9.17 ms |

![sysbench-08](/media/sysbench-08.png)

- `Insert` RW test

@@ -209,4 +209,4 @@ block-cache-size = "2GB"
| 5 TiKV physical nodes | 32 | 1 million | 256 * 3 | 60689 | 37.96 ms / 29.9 ms |
| 7 TiKV physical nodes | 32 | 1 million | 256 * 3 | 80087 | 9.62 ms / 21.37 ms |

![sysbench-09](/media/sysbench-09.png)
2 changes: 1 addition & 1 deletion dev/benchmark/tpcc.md
@@ -26,7 +26,7 @@ IDC machine:
| OS | Linux (CentOS 7.3.1611) |
| CPU | 40 vCPUs, Intel(R) Xeon(R) CPU E5-2630 v4 @ 2.20GHz |
| RAM | 128GB |
| DISK | 1.5TB SSD \* 2 |

This test uses the open-source BenchmarkSQL 5.0 as the TPC-C testing tool and adds the support for the MySQL protocol. You can download the testing program by using the following command:

2 changes: 1 addition & 1 deletion dev/benchmark/tpch.md
@@ -102,7 +102,7 @@ TiDB 2.0:

It should be noted that:

- In the diagram above, the orange bars represent the query results of Release 1.0 and the blue bars represent the query results of Release 2.0. The y-axis represents the processing time of queries in seconds; the shorter, the faster.
- Query 15 is tagged with "NaN" because VIEW is currently not supported in either TiDB 1.0 or 2.0. We have plans to provide VIEW support in a future release.
- Queries 2, 17, and 19 in the TiDB 1.0 column are tagged with "NaN" because TiDB 1.0 did not return results for these queries.
- Queries 5, 7, 18, and 21 in the TiDB 1.0 column are tagged with "OOM" because the memory consumption was too high.
4 changes: 2 additions & 2 deletions dev/community.md
@@ -7,6 +7,6 @@ category: community
# Connect with us

- **Twitter**: [@PingCAP](https://twitter.com/PingCAP)
- **Reddit**: <https://www.reddit.com/r/TiDB/>
- **Stack Overflow**: <https://stackoverflow.com/questions/tagged/tidb>
- **Mailing list**: [Google Group](https://groups.google.com/forum/#!forum/tidb-user)
