clusterinfo: Refine (#815)
Signed-off-by: Breezewish <me@breeswish.org>
breezewish authored Nov 26, 2020
1 parent 0b42238 commit 2ac8400
Showing 28 changed files with 1,471 additions and 643 deletions.
24 changes: 13 additions & 11 deletions etc/manualTestEnv/_shared/Vagrantfile.partial.pubKey.rb
@@ -2,23 +2,25 @@
ssh_pub_key = File.readlines("#{File.dirname(__FILE__)}/vagrant_key.pub").first.strip

config.vm.box = "hashicorp/bionic64"
-  config.vm.provision "shell", privileged: false, inline: <<-SHELL
+  config.vm.provision "zsh", type: "shell", privileged: false, inline: <<-SHELL
echo "Installing zsh"
sudo apt install -y zsh
sh -c "$(curl -fsSL https://raw.githubusercontent.com/ohmyzsh/ohmyzsh/master/tools/install.sh)"
sudo chsh -s /usr/bin/zsh vagrant
SHELL

config.vm.provision "private_key", type: "shell", privileged: false, inline: <<-SHELL
echo "Inserting private key"
echo #{ssh_pub_key} >> /home/vagrant/.ssh/authorized_keys
SHELL

-  config.vm.provision "shell", privileged: true, inline: <<-SHELL
-    echo "setting ulimit"
-    sudo echo "fs.file-max = 65535" >> /etc/sysctl.conf
-    sudo sysctl -p
-    sudo echo "* hard nofile 65535" >> /etc/security/limits.conf
-    sudo echo "* soft nofile 65535" >> /etc/security/limits.conf
-    sudo echo "root hard nofile 65535" >> /etc/security/limits.conf
-    sudo echo "root hard nofile 65535" >> /etc/security/limits.conf
+  config.vm.provision "ulimit", type: "shell", privileged: true, inline: <<-SHELL
+    echo "Setting ulimit"
+    echo "fs.file-max = 65535" >> /etc/sysctl.conf
+    sysctl -p
+    echo "* hard nofile 65535" >> /etc/security/limits.conf
+    echo "* soft nofile 65535" >> /etc/security/limits.conf
+    echo "root hard nofile 65535" >> /etc/security/limits.conf
+    echo "root soft nofile 65535" >> /etc/security/limits.conf
SHELL
end

# ulimit ref: https://my.oschina.net/u/914655/blog/3067520
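The ulimit provisioner works by appending `nofile` entries to `/etc/security/limits.conf`. A minimal sketch of that append-and-check pattern, run against a scratch file as a stand-in for the real config (the temp file is illustrative, not part of the Vagrantfile):

```shell
# Reproduce the provisioner's appends against a scratch file
# (a stand-in for /etc/security/limits.conf).
conf=$(mktemp)
echo "* hard nofile 65535" >> "$conf"
echo "* soft nofile 65535" >> "$conf"
echo "root hard nofile 65535" >> "$conf"
echo "root soft nofile 65535" >> "$conf"

# All four entries should carry the 65535 open-file limit.
grep -c "nofile 65535" "$conf"
rm -f "$conf"
```

After `vagrant up`, running `ulimit -n` inside the guest should report 65535 once the limits have taken effect for new sessions.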
36 changes: 36 additions & 0 deletions etc/manualTestEnv/complexCase1/README.md
@@ -0,0 +1,36 @@
# complexCase1

TiDB, PD, TiKV, and TiFlash spread across multiple hosts.

## Usage

1. Start the box:

```bash
VAGRANT_EXPERIMENTAL="disks" vagrant up
```

1. Use [TiUP](https://tiup.io/) to deploy the cluster to the box (only need to do it once):

```bash
tiup cluster deploy complexCase1 v4.0.8 topology.yaml -i ../_shared/vagrant_key -y --user vagrant
```

1. Start the cluster in the box:

```bash
tiup cluster start complexCase1
```

1. Start TiDB Dashboard server:

```bash
bin/tidb-dashboard --pd http://10.0.1.31:2379
```

## Cleanup

```bash
tiup cluster destroy complexCase1 -y
vagrant destroy --force
```
40 changes: 40 additions & 0 deletions etc/manualTestEnv/complexCase1/Vagrantfile
@@ -0,0 +1,40 @@
load "#{File.dirname(__FILE__)}/../_shared/Vagrantfile.partial.pubKey.rb"

Vagrant.configure("2") do |config|
config.vm.provider "virtualbox" do |v|
v.memory = 1024
v.cpus = 1
end

(1..5).each do |i|
config.vm.define "node#{i}" do |node|
node.vm.network "private_network", ip: "10.0.1.#{i+30}"
(1..4).each do |j|
node.vm.disk :disk, size: "10GB", name: "disk-#{i}-#{j}"
end
end
end

config.vm.provision "disk", type: "shell", privileged: false, inline: <<-SHELL
echo "Formatting disks"
sudo mkfs.ext4 -j -L hdd1 /dev/sdb
sudo mkfs.ext4 -j -L hdd2 /dev/sdc
sudo mkfs.ext4 -j -L hdd3 /dev/sdd
sudo mkfs.ext4 -j -L hdd4 /dev/sde
echo "Mounting directories"
sudo mkdir -p /pingcap/tidb-data
echo "/dev/sdb /pingcap/tidb-data ext4 defaults 0 0" | sudo tee -a /etc/fstab
sudo mount /pingcap/tidb-data
sudo mkdir -p /pingcap/tidb-deploy
sudo mkdir -p /pingcap/tidb-data/tikv-1
sudo mkdir -p /pingcap/tidb-data/tikv-2
echo "/dev/sdc /pingcap/tidb-deploy ext4 defaults 0 0" | sudo tee -a /etc/fstab
echo "/dev/sdd /pingcap/tidb-data/tikv-1 ext4 defaults 0 0" | sudo tee -a /etc/fstab
echo "/dev/sde /pingcap/tidb-data/tikv-2 ext4 defaults 0 0" | sudo tee -a /etc/fstab
sudo mount /pingcap/tidb-deploy
sudo mount /pingcap/tidb-data/tikv-1
sudo mount /pingcap/tidb-data/tikv-2
SHELL
end
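The `(1..5)` loop in the Vagrantfile assigns each node a private IP by offsetting its index by 30. The mapping can be sketched directly in shell (node names and the 10.0.1.0/24 range are taken from the Vagrantfile above):

```shell
# Node index i (1..5) maps to private IP 10.0.1.(i+30),
# mirroring the "10.0.1.#{i+30}" expression in the Vagrantfile.
for i in 1 2 3 4 5; do
  echo "node$i -> 10.0.1.$((i + 30))"
done
```

This yields `node1 -> 10.0.1.31` through `node5 -> 10.0.1.35`, the addresses referenced by `topology.yaml`.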
85 changes: 85 additions & 0 deletions etc/manualTestEnv/complexCase1/topology.yaml
@@ -0,0 +1,85 @@
global:
user: tidb
deploy_dir: /pingcap/tidb-deploy
data_dir: /pingcap/tidb-data

server_configs:
tikv:
server.grpc-concurrency: 1
raftstore.apply-pool-size: 1
raftstore.store-pool-size: 1
readpool.unified.max-thread-count: 1
readpool.storage.use-unified-pool: false
readpool.coprocessor.use-unified-pool: true
storage.block-cache.capacity: 256MB
raftstore.capacity: 5GB

# Overview:
# 31: 1 PD, 1 TiDB, 2 TiKV
# 32: 1 TiDB, 2 TiKV
# 33: 1 PD, 1 TiFlash
# 34: 2 TiKV, 1 TiFlash
# 35: 1 TiFlash

pd_servers:
- host: 10.0.1.31
- host: 10.0.1.33

tikv_servers:
- host: 10.0.1.31
port: 20160
status_port: 20180
data_dir: /pingcap/tidb-data/tikv-1/tikv-20160
config:
server.labels: { host: "tikv1" }
- host: 10.0.1.31
port: 20161
status_port: 20181
data_dir: /pingcap/tidb-data/tikv-2/tikv-20161
config:
server.labels: { host: "tikv2" }
- host: 10.0.1.32
port: 20160
status_port: 20180
data_dir: /pingcap/tidb-data/tikv-1/tikv-20160
config:
server.labels: { host: "tikv1" }
- host: 10.0.1.32
port: 20161
status_port: 20181
data_dir: /pingcap/tidb-data/tikv-2/tikv-20161
config:
server.labels: { host: "tikv2" }
- host: 10.0.1.34
port: 20160
status_port: 20180
data_dir: /pingcap/tidb-data/tikv-1/tikv-20160
config:
server.labels: { host: "tikv1" }
- host: 10.0.1.34
port: 20161
status_port: 20181
data_dir: /pingcap/tidb-data/tikv-2/tikv-20161
config:
server.labels: { host: "tikv2" }

tiflash_servers:
- host: 10.0.1.33
data_dir: /pingcap/tidb-data/tikv-1/tiflash
- host: 10.0.1.34
data_dir: /pingcap/tidb-data/tikv-2/tiflash
- host: 10.0.1.35
data_dir: /pingcap/tidb-data/tikv-1/tiflash

tidb_servers:
- host: 10.0.1.31
- host: 10.0.1.32

grafana_servers:
- host: 10.0.1.31

monitoring_servers:
- host: 10.0.1.31

alertmanager_servers:
- host: 10.0.1.31
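Each TiKV host in the topology runs two instances that differ only by a +1 port offset. A small sketch of that port/status-port pairing (hosts and base ports copied from the `tikv_servers` section above):

```shell
# Three hosts each run two TiKV instances; the second instance
# shifts both the service port (20160) and status port (20180) by 1.
for host in 10.0.1.31 10.0.1.32 10.0.1.34; do
  for offset in 0 1; do
    echo "$host port=$((20160 + offset)) status_port=$((20180 + offset))"
  done
done
```

This enumerates the six TiKV instances listed under `tikv_servers`, each with its own `data_dir` on a separate mounted disk.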
2 changes: 1 addition & 1 deletion etc/manualTestEnv/multiHost/README.md
@@ -13,7 +13,7 @@ TiDB, PD, TiKV, TiFlash each in different hosts.
1. Use [TiUP](https://tiup.io/) to deploy the cluster to the box (only need to do it once):

```bash
-   tiup cluster deploy multiHost v4.0.4 topology.yaml -i ../_shared/vagrant_key -y --user vagrant
+   tiup cluster deploy multiHost v4.0.8 topology.yaml -i ../_shared/vagrant_key -y --user vagrant
```

1. Start the cluster in the box:
2 changes: 1 addition & 1 deletion etc/manualTestEnv/multiReplica/README.md
@@ -13,7 +13,7 @@ Multiple TiKV nodes in different labels.
1. Use [TiUP](https://tiup.io/) to deploy the cluster to the box (only need to do it once):

```bash
-   tiup cluster deploy multiReplica v4.0.4 topology.yaml -i ../_shared/vagrant_key -y --user vagrant
+   tiup cluster deploy multiReplica v4.0.8 topology.yaml -i ../_shared/vagrant_key -y --user vagrant
```

1. Start the cluster in the box:
2 changes: 1 addition & 1 deletion etc/manualTestEnv/singleHost/README.md
@@ -13,7 +13,7 @@ TiDB, PD, TiKV, TiFlash in the same host.
1. Use [TiUP](https://tiup.io/) to deploy the cluster to the box (only need to do it once):

```bash
-   tiup cluster deploy singleHost v4.0.4 topology.yaml -i ../_shared/vagrant_key -y --user vagrant
+   tiup cluster deploy singleHost v4.0.8 topology.yaml -i ../_shared/vagrant_key -y --user vagrant
```

1. Start the cluster in the box:
2 changes: 1 addition & 1 deletion etc/manualTestEnv/singleHostMultiDisk/README.md
@@ -13,7 +13,7 @@ All instances in a single host, but on different disks.
1. Use [TiUP](https://tiup.io/) to deploy the cluster to the box (only need to do it once):

```bash
-   tiup cluster deploy singleHostMultiDisk v4.0.4 topology.yaml -i ../_shared/vagrant_key -y --user vagrant
+   tiup cluster deploy singleHostMultiDisk v4.0.8 topology.yaml -i ../_shared/vagrant_key -y --user vagrant
```

1. Start the cluster in the box: