This post records my process of learning TiDB: its components and the basic steps to install, start, stop, and uninstall a cluster. Actual usage will be covered in later posts.
Components and terminology
TiDB
The SQL layer: serves client connections, parses SQL, and builds execution plans.
TiKV
The row-based storage engine, responsible for storing the data.
TiFlash
The columnar storage engine, used to accelerate analytical queries.
PD
Placement Driver: manages metadata and cluster state, including transaction timestamp allocation. An odd number of PD nodes is recommended so that leader-election votes can reach a majority.
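As a quick sketch of how the pieces fit together (assuming the two-node deployment described later in this post; the mysql client invocation and the PD HTTP endpoint are illustrative, not taken from my session): applications only ever talk to the TiDB servers over the MySQL protocol, while TiKV, TiFlash and PD stay internal.
# Applications connect to any TiDB server (MySQL protocol, default port 4000)
mysql -h 10.0.2.101 -P 4000 -u root
# PD exposes an HTTP API (default port 2379) that can be used to inspect cluster state
curl http://10.0.2.101:2379/pd/api/v1/members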
Installing TiDB
Download the tiup tool
[root@gbase_rh7_001 ~]# curl --proto '=https' --tlsv1.2 -sSf https://tiup-mirrors.pingcap.com/install.sh | sh
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 8697k 100 8697k 0 0 3168k 0 0:00:02 0:00:02 --:--:-- 3168k
WARN: adding root certificate via internet: https://tiup-mirrors.pingcap.com/root.json
You can revoke this by remove /root/.tiup/bin/7b8e153f2e2d0928.root.json
Set mirror to https://tiup-mirrors.pingcap.com success
Detected shell: bash
Shell profile: /root/.bash_profile
/root/.bash_profile has been modified to add tiup to PATH
open a new terminal or source /root/.bash_profile to use it
Installed path: /root/.tiup/bin/tiup
===============================================
Have a try: tiup playground
===============================================
[root@gbase_rh7_001 ~]#
Set the environment variable
[root@gbase_rh7_001 ~]# source .bash_profile
[root@gbase_rh7_001 ~]# cat .bash_profile
# .bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi
# User specific environment and startup programs
PATH=$PATH:$HOME/bin
export PATH
export PATH=/root/.tiup/bin:$PATH
[root@gbase_rh7_001 ~]#
[root@gbase_rh7_001 ~]# which tiup
/root/.tiup/bin/tiup
[root@gbase_rh7_001 ~]#
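With the PATH in place, a quick sanity check (the exact version string will of course differ from mine):
# Confirm tiup runs and report its version
tiup --version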
Install the cluster component via tiup (running `tiup cluster` for the first time downloads it automatically)
[root@gbase_rh7_001 ~]# tiup cluster
The component `cluster` is not installed; downloading from repository.
download https://tiup-mirrors.pingcap.com/cluster-v1.3.2-linux-amd64.tar.gz 10.05 MiB / 10.05 MiB 100.00% 3.98 MiB p/s
Starting component `cluster`: /root/.tiup/components/cluster/v1.3.2/tiup-cluster
Deploy a TiDB cluster for production
Usage:
tiup cluster [command]
Available Commands:
check Perform preflight checks for the cluster.
deploy Deploy a cluster for production
start Start a TiDB cluster
stop Stop a TiDB cluster
restart Restart a TiDB cluster
scale-in Scale in a TiDB cluster
scale-out Scale out a TiDB cluster
destroy Destroy a specified cluster
clean (EXPERIMENTAL) Cleanup a specified cluster
upgrade Upgrade a specified TiDB cluster
exec Run shell command on host in the tidb cluster
display Display information of a TiDB cluster
prune Destroy and remove instances that is in tombstone state
list List all clusters
audit Show audit log of cluster operation
import Import an exist TiDB cluster from TiDB-Ansible
edit-config Edit TiDB cluster config.
Will use editor from environment variable `EDITOR`, default use vi
reload Reload a TiDB cluster's config and restart if needed
patch Replace the remote package with a specified package and restart the service
rename Rename the cluster
enable Enable a TiDB cluster automatically at boot
disable Disable starting a TiDB cluster automatically at boot
help Help about any command
Flags:
-h, --help help for tiup
--ssh string (EXPERIMENTAL) The executor type: 'builtin', 'system', 'none'.
--ssh-timeout uint Timeout in seconds to connect host via SSH, ignored for operations that don't need an SSH connection. (default 5)
-v, --version version for tiup
--wait-timeout uint Timeout in seconds to wait for an operation to complete, ignored for operations that don't fit. (default 120)
-y, --yes Skip all confirmations and assumes 'yes'
Use "tiup cluster help [command]" for more information about a command.
Check the tiup cluster component version (the binary path contains the version)
[root@gbase_rh7_001 ~]# tiup --binary cluster
/root/.tiup/components/cluster/v1.3.2/tiup-cluster
[root@gbase_rh7_001 ~]#
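TiUP and its components can be upgraded in place later on. A minimal sketch using the `tiup update` command (shown here only as a pointer, not run in this session):
# Upgrade tiup itself
tiup update --self
# Upgrade the cluster component to the latest published version
tiup update cluster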
List the TiDB versions currently available
[root@gbase_rh7_001 ~]# tiup list tidb
Available versions for tidb:
Version Installed Release Platforms
------- --------- ------- ---------
nightly 2021-02-02T08:45:35+08:00 linux/arm64,darwin/amd64,linux/amd64
v3.0 2020-04-16T16:58:06+08:00 darwin/amd64,linux/amd64
v3.0.0 2020-04-16T14:03:31+08:00 darwin/amd64,linux/amd64
v3.0.1 2020-04-27T19:38:36+08:00 linux/arm64,darwin/amd64,linux/amd64
v3.0.2 2020-04-16T23:55:11+08:00 darwin/amd64,linux/amd64
v3.0.3 2020-04-17T00:16:31+08:00 darwin/amd64,linux/amd64
v3.0.4 2020-04-17T00:22:46+08:00 darwin/amd64,linux/amd64
v3.0.5 2020-04-17T00:29:45+08:00 darwin/amd64,linux/amd64
v3.0.6 2020-04-17T00:39:33+08:00 darwin/amd64,linux/amd64
v3.0.7 2020-04-17T00:46:32+08:00 darwin/amd64,linux/amd64
v3.0.8 2020-04-17T00:54:19+08:00 darwin/amd64,linux/amd64
v3.0.9 2020-04-17T01:00:58+08:00 darwin/amd64,linux/amd64
v3.0.10 2020-03-13T14:11:53.774527401+08:00 darwin/amd64,linux/amd64
v3.0.11 2020-04-17T01:09:20+08:00 darwin/amd64,linux/amd64
v3.0.12 2020-04-17T01:16:04+08:00 darwin/amd64,linux/amd64
v3.0.13 2020-04-26T17:25:01+08:00 darwin/amd64,linux/amd64
v3.0.14 2020-05-09T21:11:49+08:00 linux/arm64,darwin/amd64,linux/amd64
v3.0.15 2020-06-05T16:50:59+08:00 linux/arm64,darwin/amd64,linux/amd64
v3.0.16 2020-07-03T20:05:15+08:00 linux/arm64,darwin/amd64,linux/amd64
v3.0.17 2020-08-03T15:18:39+08:00 linux/arm64,darwin/amd64,linux/amd64
v3.0.18 2020-08-21T19:56:59+08:00 linux/arm64,darwin/amd64,linux/amd64
v3.0.19 2020-09-25T18:19:51+08:00 linux/arm64,darwin/amd64,linux/amd64
v3.0.20 2020-12-25T15:17:43+08:00 linux/arm64,darwin/amd64,linux/amd64
v3.1.0-beta 2020-05-22T14:35:59+08:00 linux/arm64,darwin/amd64,linux/amd64
v3.1.0-beta.1 2020-05-22T15:22:30+08:00 linux/arm64,darwin/amd64,linux/amd64
v3.1.0-beta.2 2020-05-22T15:28:20+08:00 linux/arm64,darwin/amd64,linux/amd64
v3.1.0-rc 2020-05-22T15:56:23+08:00 linux/arm64,darwin/amd64,linux/amd64
v3.1.0 2020-05-22T15:34:33+08:00 linux/arm64,darwin/amd64,linux/amd64
v3.1.1 2020-04-30T21:02:32+08:00 linux/arm64,darwin/amd64,linux/amd64
v3.1.2 2020-06-04T17:53:39+08:00 linux/arm64,darwin/amd64,linux/amd64
v4.0.0-beta 2020-05-26T11:18:05+08:00 linux/arm64,darwin/amd64,linux/amd64
v4.0.0-beta.1 2020-05-26T11:42:48+08:00 linux/arm64,darwin/amd64,linux/amd64
v4.0.0-beta.2 2020-05-26T11:56:51+08:00 linux/arm64,darwin/amd64,linux/amd64
v4.0.0-rc 2020-05-26T14:56:06+08:00 linux/arm64,darwin/amd64,linux/amd64
v4.0.0-rc.1 2020-04-29T01:03:31+08:00 linux/arm64,darwin/amd64,linux/amd64
v4.0.0-rc.2 2020-05-15T21:54:51+08:00 linux/arm64,darwin/amd64,linux/amd64
v4.0.0 2020-05-28T16:23:23+08:00 linux/arm64,darwin/amd64,linux/amd64
v4.0.1 2020-06-15T12:00:45+08:00 linux/arm64,darwin/amd64,linux/amd64
v4.0.2 2020-07-01T19:57:14+08:00 linux/arm64,darwin/amd64,linux/amd64
v4.0.3 2020-07-25T00:54:45+08:00 linux/arm64,darwin/amd64,linux/amd64
v4.0.4 2020-07-31T16:36:28+08:00 linux/arm64,darwin/amd64,linux/amd64
v4.0.5 2020-08-31T23:49:40+08:00 linux/arm64,darwin/amd64,linux/amd64
v4.0.6 2020-09-16T12:18:02+08:00 linux/arm64,darwin/amd64,linux/amd64
v4.0.7 2020-09-29T22:25:42+08:00 linux/arm64,darwin/amd64,linux/amd64
v4.0.8 2020-10-30T19:22:54+08:00 linux/arm64,darwin/amd64,linux/amd64
v4.0.9 2020-12-21T16:55:42+08:00 linux/arm64,darwin/amd64,linux/amd64
v4.0.10 2021-01-15T13:12:20+08:00 linux/arm64,darwin/amd64,linux/amd64
v5.0.0-rc 2021-01-12T23:23:23+08:00 linux/arm64,darwin/amd64,linux/amd64
[root@gbase_rh7_001 ~]#
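The deployment below uses v4.0.10, which at the time was the newest stable v4.0.x release in this list. The listing can be narrowed with ordinary shell filtering (nothing TiUP-specific):
tiup list tidb | grep -E '^v4\.0\.'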
TiDB deployment topology file
[root@gbase_rh7_001 ~]# vi topology.yaml
# # Global variables are applied to all deployments and used as the default value of
# # the deployments if a specific deployment value is missing.
global:
  user: "tidb"
  ssh_port: 22
  deploy_dir: "/tidb-deploy"
  data_dir: "/tidb-data"
server_configs:
  pd:
    replication.enable-placement-rules: true
pd_servers:
  - host: 10.0.2.101
  - host: 10.0.2.115
tidb_servers:
  - host: 10.0.2.101
  - host: 10.0.2.115
tikv_servers:
  - host: 10.0.2.101
  - host: 10.0.2.115
tiflash_servers:
  - host: 10.0.2.101
  - host: 10.0.2.115
    data_dir: /tidb-data/tiflash-9000
    deploy_dir: /tidb-deploy/tiflash-9000
monitoring_servers:
  - host: 10.0.2.101
grafana_servers:
  - host: 10.0.2.101
alertmanager_servers:
  - host: 10.0.2.101
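Before deploying, the topology and the target hosts can be validated with the `check` subcommand listed in the help output above. This is a sketch: I reuse the same --user/-p form as the deploy command below, and the --apply flag (which tries to fix reported problems) is from my memory of TiUP rather than from this session.
# Preflight checks against the target hosts
tiup cluster check ./topology.yaml --user root -p 111111
# Optionally let tiup try to fix what it can (kernel parameters, packages, ...)
tiup cluster check ./topology.yaml --apply --user root -p 111111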
Deploy the cluster
[root@gbase_rh7_001 ~]# tiup cluster deploy tidb-test v4.0.10 ./topology.yaml --user root -p 111111
Starting component `cluster`: /root/.tiup/components/cluster/v1.3.2/tiup-cluster deploy tidb-test v4.0.10 ./topology.yaml --user root -p 111111
Please confirm your topology:
Cluster type: tidb
Cluster name: tidb-test
Cluster version: v4.0.10
Type Host Ports OS/Arch Directories
---- ---- ----- ------- -----------
pd 10.0.2.101 2379/2380 linux/x86_64 /tidb-deploy/pd-2379,/tidb-data/pd-2379
pd 10.0.2.115 2379/2380 linux/x86_64 /tidb-deploy/pd-2379,/tidb-data/pd-2379
tikv 10.0.2.101 20160/20180 linux/x86_64 /tidb-deploy/tikv-20160,/tidb-data/tikv-20160
tikv 10.0.2.115 20160/20180 linux/x86_64 /tidb-deploy/tikv-20160,/tidb-data/tikv-20160
tidb 10.0.2.101 4000/10080 linux/x86_64 /tidb-deploy/tidb-4000
tidb 10.0.2.115 4000/10080 linux/x86_64 /tidb-deploy/tidb-4000
tiflash 10.0.2.101 9000/8123/3930/20170/20292/8234 linux/x86_64 /tidb-deploy/tiflash-9000,/tidb-data/tiflash-9000
tiflash 10.0.2.115 9000/8123/3930/20170/20292/8234 linux/x86_64 /tidb-deploy/tiflash-9000,/tidb-data/tiflash-9000
prometheus 10.0.2.101 9090 linux/x86_64 /tidb-deploy/prometheus-9090,/tidb-data/prometheus-9090
grafana 10.0.2.101 3000 linux/x86_64 /tidb-deploy/grafana-3000
alertmanager 10.0.2.101 9093/9094 linux/x86_64 /tidb-deploy/alertmanager-9093,/tidb-data/alertmanager-9093
Attention:
1. If the topology is not what you expected, check your yaml file.
2. Please confirm there is no port/directory conflicts in same host.
Do you want to continue? [y/N]: y
Input SSH password:
+ Generate SSH keys ... Done
+ Download TiDB components
- Download pd:v4.0.10 (linux/amd64) ... Done
- Download tikv:v4.0.10 (linux/amd64) ... Done
- Download tidb:v4.0.10 (linux/amd64) ... Done
- Download tiflash:v4.0.10 (linux/amd64) ... Done
- Download prometheus:v4.0.10 (linux/amd64) ... Done
- Download grafana:v4.0.10 (linux/amd64) ... Done
- Download alertmanager:v0.17.0 (linux/amd64) ... Done
- Download node_exporter:v0.17.0 (linux/amd64) ... Done
- Download blackbox_exporter:v0.12.0 (linux/amd64) ... Done
+ Initialize target host environments
- Prepare 10.0.2.101:22 ... Done
- Prepare 10.0.2.115:22 ... Done
+ Copy files
- Copy pd -> 10.0.2.101 ... Done
- Copy pd -> 10.0.2.115 ... Done
- Copy tikv -> 10.0.2.101 ... Done
- Copy tikv -> 10.0.2.115 ... Done
- Copy tidb -> 10.0.2.101 ... Done
- Copy tidb -> 10.0.2.115 ... Done
- Copy tiflash -> 10.0.2.101 ... Done
- Copy tiflash -> 10.0.2.115 ... Done
- Copy prometheus -> 10.0.2.101 ... Done
- Copy grafana -> 10.0.2.101 ... Done
- Copy alertmanager -> 10.0.2.101 ... Done
- Copy node_exporter -> 10.0.2.101 ... Done
- Copy node_exporter -> 10.0.2.115 ... Done
- Copy blackbox_exporter -> 10.0.2.101 ... Done
- Copy blackbox_exporter -> 10.0.2.115 ... Done
+ Check status
Enabling component pd
Enabling instance pd 10.0.2.115:2379
Enabling instance pd 10.0.2.101:2379
Enable pd 10.0.2.115:2379 success
Enable pd 10.0.2.101:2379 success
Enabling component node_exporter
Enabling component blackbox_exporter
Enabling component node_exporter
Enabling component blackbox_exporter
Enabling component tikv
Enabling instance tikv 10.0.2.115:20160
Enabling instance tikv 10.0.2.101:20160
Enable tikv 10.0.2.101:20160 success
Enable tikv 10.0.2.115:20160 success
Enabling component tidb
Enabling instance tidb 10.0.2.115:4000
Enabling instance tidb 10.0.2.101:4000
Enable tidb 10.0.2.115:4000 success
Enable tidb 10.0.2.101:4000 success
Enabling component tiflash
Enabling instance tiflash 10.0.2.115:9000
Enabling instance tiflash 10.0.2.101:9000
Enable tiflash 10.0.2.101:9000 success
Enable tiflash 10.0.2.115:9000 success
Enabling component prometheus
Enabling instance prometheus 10.0.2.101:9090
Enable prometheus 10.0.2.101:9090 success
Enabling component grafana
Enabling instance grafana 10.0.2.101:3000
Enable grafana 10.0.2.101:3000 success
Enabling component alertmanager
Enabling instance alertmanager 10.0.2.101:9093
Enable alertmanager 10.0.2.101:9093 success
Cluster `tidb-test` deployed successfully, you can start it with command: `tiup cluster start tidb-test`
[root@gbase_rh7_001 ~]#
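The "Enabling ..." lines above mean the components were registered as systemd services on the target hosts so they can start at boot. The unit names below follow the <component>-<port> pattern I have seen TiUP use; treat them as an assumption rather than output from this session:
# On a target host, e.g. 10.0.2.101 (unit names assumed to be <component>-<port>)
systemctl status pd-2379
systemctl status tikv-20160
systemctl status tidb-4000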
View the clusters managed by TiUP
TiUP can manage multiple TiDB clusters. This command lists all clusters currently managed by TiUP cluster, including the cluster name, deployment user, version, and private key path.
[root@gbase_rh7_001 ~]# tiup cluster list
Starting component `cluster`: /root/.tiup/components/cluster/v1.3.2/tiup-cluster list
Name User Version Path PrivateKey
---- ---- ------- ---- ----------
tidb-test tidb v4.0.10 /root/.tiup/storage/cluster/clusters/tidb-test /root/.tiup/storage/cluster/clusters/tidb-test/ssh/id_rsa
[root@gbase_rh7_001 ~]#
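The Path column points at the directory where TiUP keeps this cluster's local metadata (topology, SSH key). To my recollection the rendered topology is stored there as meta.yaml; the file name is an assumption, not something shown in this session:
# Inspect the locally stored cluster metadata (file name assumed)
cat /root/.tiup/storage/cluster/clusters/tidb-test/meta.yaml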
Check the deployed TiDB cluster
The output lists, for the tidb-test cluster, each instance's ID, role, host, listening ports, status (Down/inactive here, because the cluster has not been started yet), and data/deploy directories.
[root@gbase_rh7_001 ~]# tiup cluster display tidb-test
Starting component `cluster`: /root/.tiup/components/cluster/v1.3.2/tiup-cluster display tidb-test
Cluster type: tidb
Cluster name: tidb-test
Cluster version: v4.0.10
SSH type: builtin
ID Role Host Ports OS/Arch Status Data Dir Deploy Dir
-- ---- ---- ----- ------- ------ -------- ----------
10.0.2.101:9093 alertmanager 10.0.2.101 9093/9094 linux/x86_64 inactive /tidb-data/alertmanager-9093 /tidb-deploy/alertmanager-9093
10.0.2.101:3000 grafana 10.0.2.101 3000 linux/x86_64 inactive - /tidb-deploy/grafana-3000
10.0.2.101:2379 pd 10.0.2.101 2379/2380 linux/x86_64 Down /tidb-data/pd-2379 /tidb-deploy/pd-2379
10.0.2.115:2379 pd 10.0.2.115 2379/2380 linux/x86_64 Down /tidb-data/pd-2379 /tidb-deploy/pd-2379
10.0.2.101:9090 prometheus 10.0.2.101 9090 linux/x86_64 inactive /tidb-data/prometheus-9090 /tidb-deploy/prometheus-9090
10.0.2.101:4000 tidb 10.0.2.101 4000/10080 linux/x86_64 Down - /tidb-deploy/tidb-4000
10.0.2.115:4000 tidb 10.0.2.115 4000/10080 linux/x86_64 Down - /tidb-deploy/tidb-4000
10.0.2.101:9000 tiflash 10.0.2.101 9000/8123/3930/20170/20292/8234 linux/x86_64 N/A /tidb-data/tiflash-9000 /tidb-deploy/tiflash-9000
10.0.2.115:9000 tiflash 10.0.2.115 9000/8123/3930/20170/20292/8234 linux/x86_64 N/A /tidb-data/tiflash-9000 /tidb-deploy/tiflash-9000
10.0.2.101:20160 tikv 10.0.2.101 20160/20180 linux/x86_64 N/A /tidb-data/tikv-20160 /tidb-deploy/tikv-20160
10.0.2.115:20160 tikv 10.0.2.115 20160/20180 linux/x86_64 N/A /tidb-data/tikv-20160 /tidb-deploy/tikv-20160
Total nodes: 11
[root@gbase_rh7_001 ~]#
Start the TiDB cluster
[root@gbase_rh7_001 ~]# tiup cluster start tidb-test
Starting component `cluster`: /root/.tiup/components/cluster/v1.3.2/tiup-cluster start tidb-test
Starting cluster tidb-test...
+ [ Serial ] - SSHKeySet: privateKey=/root/.tiup/storage/cluster/clusters/tidb-test/ssh/id_rsa, publicKey=/root/.tiup/storage/cluster/clusters/tidb-test/ssh/id_rsa.pub
+ [Parallel] - UserSSH: user=tidb, host=10.0.2.101
+ [Parallel] - UserSSH: user=tidb, host=10.0.2.101
+ [Parallel] - UserSSH: user=tidb, host=10.0.2.115
+ [Parallel] - UserSSH: user=tidb, host=10.0.2.101
+ [Parallel] - UserSSH: user=tidb, host=10.0.2.115
+ [Parallel] - UserSSH: user=tidb, host=10.0.2.101
+ [Parallel] - UserSSH: user=tidb, host=10.0.2.115
+ [Parallel] - UserSSH: user=tidb, host=10.0.2.101
+ [Parallel] - UserSSH: user=tidb, host=10.0.2.115
+ [Parallel] - UserSSH: user=tidb, host=10.0.2.101
+ [Parallel] - UserSSH: user=tidb, host=10.0.2.101
+ [ Serial ] - StartCluster
Starting component pd
Starting instance pd 10.0.2.115:2379
Starting instance pd 10.0.2.101:2379
Start pd 10.0.2.115:2379 success
Start pd 10.0.2.101:2379 success
Starting component node_exporter
Starting instance 10.0.2.101
Start 10.0.2.101 success
Starting component blackbox_exporter
Starting instance 10.0.2.101
Start 10.0.2.101 success
Starting component node_exporter
Starting instance 10.0.2.115
Start 10.0.2.115 success
Starting component blackbox_exporter
Starting instance 10.0.2.115
Start 10.0.2.115 success
Starting component tikv
Starting instance tikv 10.0.2.115:20160
Starting instance tikv 10.0.2.101:20160
Start tikv 10.0.2.115:20160 success
Start tikv 10.0.2.101:20160 success
Starting component tidb
Starting instance tidb 10.0.2.115:4000
Starting instance tidb 10.0.2.101:4000
Start tidb 10.0.2.101:4000 success
Start tidb 10.0.2.115:4000 success
Starting component tiflash
Starting instance tiflash 10.0.2.101:9000
Starting instance tiflash 10.0.2.115:9000
Start tiflash 10.0.2.115:9000 success
Start tiflash 10.0.2.101:9000 success
Starting component prometheus
Starting instance prometheus 10.0.2.101:9090
Start prometheus 10.0.2.101:9090 success
Starting component grafana
Starting instance grafana 10.0.2.101:3000
Start grafana 10.0.2.101:3000 success
Starting component alertmanager
Starting instance alertmanager 10.0.2.101:9093
Start alertmanager 10.0.2.101:9093 success
+ [ Serial ] - UpdateTopology: cluster=tidb-test
Started cluster `tidb-test` successfully
[root@gbase_rh7_001 ~]#
Verify the cluster status
[root@gbase_rh7_001 ~]# tiup cluster display tidb-test
Starting component `cluster`: /root/.tiup/components/cluster/v1.3.2/tiup-cluster display tidb-test
Cluster type: tidb
Cluster name: tidb-test
Cluster version: v4.0.10
SSH type: builtin
Dashboard URL: http://10.0.2.101:2379/dashboard
ID Role Host Ports OS/Arch Status Data Dir Deploy Dir
-- ---- ---- ----- ------- ------ -------- ----------
10.0.2.101:9093 alertmanager 10.0.2.101 9093/9094 linux/x86_64 Up /tidb-data/alertmanager-9093 /tidb-deploy/alertmanager-9093
10.0.2.101:3000 grafana 10.0.2.101 3000 linux/x86_64 Up - /tidb-deploy/grafana-3000
10.0.2.101:2379 pd 10.0.2.101 2379/2380 linux/x86_64 Up|UI /tidb-data/pd-2379 /tidb-deploy/pd-2379
10.0.2.115:2379 pd 10.0.2.115 2379/2380 linux/x86_64 Up|L /tidb-data/pd-2379 /tidb-deploy/pd-2379
10.0.2.101:9090 prometheus 10.0.2.101 9090 linux/x86_64 Up /tidb-data/prometheus-9090 /tidb-deploy/prometheus-9090
10.0.2.101:4000 tidb 10.0.2.101 4000/10080 linux/x86_64 Up - /tidb-deploy/tidb-4000
10.0.2.115:4000 tidb 10.0.2.115 4000/10080 linux/x86_64 Up - /tidb-deploy/tidb-4000
10.0.2.101:9000 tiflash 10.0.2.101 9000/8123/3930/20170/20292/8234 linux/x86_64 Up /tidb-data/tiflash-9000 /tidb-deploy/tiflash-9000
10.0.2.115:9000 tiflash 10.0.2.115 9000/8123/3930/20170/20292/8234 linux/x86_64 Up /tidb-data/tiflash-9000 /tidb-deploy/tiflash-9000
10.0.2.101:20160 tikv 10.0.2.101 20160/20180 linux/x86_64 Up /tidb-data/tikv-20160 /tidb-deploy/tikv-20160
10.0.2.115:20160 tikv 10.0.2.115 20160/20180 linux/x86_64 Up /tidb-data/tikv-20160 /tidb-deploy/tikv-20160
Total nodes: 11
[root@gbase_rh7_001 ~]#
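With every instance reporting Up, the cluster is usable. The lines below are illustrative rather than part of the original session: a fresh TiDB deployment accepts the MySQL protocol on port 4000 with user root and an empty password, the PD dashboard URL is the one printed above, and Grafana listens on port 3000 as configured in the topology.
# Connect with any MySQL client (root has an empty password on a fresh deployment)
mysql -h 10.0.2.101 -P 4000 -u root
# TiDB Dashboard (URL printed by `tiup cluster display` above)
#   http://10.0.2.101:2379/dashboard
# Grafana monitoring
#   http://10.0.2.101:3000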
Stop the TiDB cluster
[root@gbase_rh7_001 ~]# tiup cluster stop tidb-test
Starting component `cluster`: /root/.tiup/components/cluster/v1.3.2/tiup-cluster stop tidb-test
+ [ Serial ] - SSHKeySet: privateKey=/root/.tiup/storage/cluster/clusters/tidb-test/ssh/id_rsa, publicKey=/root/.tiup/storage/cluster/clusters/tidb-test/ssh/id_rsa.pub
+ [Parallel] - UserSSH: user=tidb, host=10.0.2.101
+ [Parallel] - UserSSH: user=tidb, host=10.0.2.101
+ [Parallel] - UserSSH: user=tidb, host=10.0.2.115
+ [Parallel] - UserSSH: user=tidb, host=10.0.2.101
+ [Parallel] - UserSSH: user=tidb, host=10.0.2.115
+ [Parallel] - UserSSH: user=tidb, host=10.0.2.101
+ [Parallel] - UserSSH: user=tidb, host=10.0.2.115
+ [Parallel] - UserSSH: user=tidb, host=10.0.2.101
+ [Parallel] - UserSSH: user=tidb, host=10.0.2.115
+ [Parallel] - UserSSH: user=tidb, host=10.0.2.101
+ [Parallel] - UserSSH: user=tidb, host=10.0.2.101
+ [ Serial ] - StopCluster
Stopping component alertmanager
Stopping instance 10.0.2.101
Stop alertmanager 10.0.2.101:9093 success
Stopping component grafana
Stopping instance 10.0.2.101
Stop grafana 10.0.2.101:3000 success
Stopping component prometheus
Stopping instance 10.0.2.101
Stop prometheus 10.0.2.101:9090 success
Stopping component tiflash
Stopping instance 10.0.2.115
Stopping instance 10.0.2.101
Stop tiflash 10.0.2.115:9000 success
Stop tiflash 10.0.2.101:9000 success
Stopping component tidb
Stopping instance 10.0.2.115
Stopping instance 10.0.2.101
Stop tidb 10.0.2.115:4000 success
Stop tidb 10.0.2.101:4000 success
Stopping component tikv
Stopping instance 10.0.2.115
Stopping instance 10.0.2.101
Stop tikv 10.0.2.115:20160 success
Stop tikv 10.0.2.101:20160 success
Stopping component pd
Stopping instance 10.0.2.115
Stopping instance 10.0.2.101
Stop pd 10.0.2.101:2379 success
Stop pd 10.0.2.115:2379 success
Stopping component node_exporter
Stopping component blackbox_exporter
Stopping component node_exporter
Stopping component blackbox_exporter
Stopped cluster `tidb-test` successfully
[root@gbase_rh7_001 ~]#
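To bring the cluster back, `tiup cluster start tidb-test` is enough; `tiup cluster restart` (listed in the help output above) combines stop and start in one step:
tiup cluster start tidb-test
# or, for a stop + start in one go
tiup cluster restart tidb-test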
Uninstall TiDB
[root@gbase_rh7_001 ~]# tiup uninstall --all
Uninstalled all components successfully!
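Note that `tiup uninstall --all` only removes the component packages installed locally by TiUP. To tear down the deployed cluster itself (stop the services on the target hosts and wipe its data), the `destroy` subcommand listed in the help output above is the one to use; I did not run it in this session, so treat the line below as a pointer:
tiup cluster destroy tidb-test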
Remember to clean up the leftover deployment and data directories on each host:
rm -fr /tidb-d*
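If TiUP itself should go away as well, its home directory and the PATH line added to .bash_profile are what remain (based on where the installer put things earlier in this post):
# Remove tiup's own files
rm -rf /root/.tiup
# and delete the `export PATH=/root/.tiup/bin:$PATH` line from /root/.bash_profile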