Greenplum's new incarnation has moved to Apache and been renamed Cloudberry. This article walks through deploying Cloudberry 1.6.0 build 1 on four CentOS 8.5 machines.
Environment
Four virtual machines are used. Since my host machine is short on memory, each VM gets only 2 GB of RAM and 2 CPU cores.
10.0.2.181 vm181
10.0.2.182 vm182
10.0.2.183 vm183
10.0.2.184 vm184
- OS version: CentOS 8.5.2111 x86_64
- CPU: 2 cores
- Memory: 2 GB
- Cloudberry version: 1.6.0 build 1 (built-in PostgreSQL 14.4)
Preparation
Hostname resolution: /etc/hosts
Configure this on every machine; the services are deployed by hostname later.
[root@vm184 ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
10.0.2.181 vm181
10.0.2.182 vm182
10.0.2.183 vm183
10.0.2.184 vm184
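A quick sanity check that every name resolves on the current host, as a minimal sketch using getent:
# each line should echo the IP followed by the hostname
for h in vm181 vm182 vm183 vm184; do getent hosts $h; done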
/etc/security/limits.conf
Add the following settings on every machine:
* soft nofile 524288
* hard nofile 524288
* soft nproc 131072
* hard nproc 131072
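These limits only take effect for new login sessions. After logging back in, a quick check that they match the settings above:
ulimit -n    # open files, expect 524288
ulimit -u    # max user processes, expect 131072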
Create the user
Create the gpadmin user on every machine:
[root@vm181 ~]# useradd gpadmin
[root@vm181 ~]# passwd gpadmin
Create the segment data directory and set its owner
Run on every machine.
[root@vm181 ~]# mkdir -p /data0
[root@vm181 ~]# chown -R gpadmin:gpadmin /data0
Set up passwordless SSH for gpadmin
This only needs to be done on one machine; here, the first node, vm181.
[root@vm181 ~]# su - gpadmin
[gpadmin@vm181 ~]$ ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/gpadmin/.ssh/id_rsa):
Created directory '/home/gpadmin/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/gpadmin/.ssh/id_rsa.
Your public key has been saved in /home/gpadmin/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:JT1DExhwyLcvbRzFWAOY96bZvinq8Csylb7GdSIHOM0 gpadmin@vm181
The key's randomart image is:
+---[RSA 3072]----+
| ..oo==*o |
| o.=oo.o. |
| + ..o=o |
| o E .o.oo |
| . oS+ * |
| + = O . |
| +.+ = . |
| o =o . .. |
| +.+=o .o. |
+----[SHA256]-----+
[gpadmin@vm181 ~]$ ssh-copy-id gpadmin@10.0.2.181
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/home/gpadmin/.ssh/id_rsa.pub"
The authenticity of host '10.0.2.181 (10.0.2.181)' can't be established.
ECDSA key fingerprint is SHA256:JR5GC+5F4EX7GB4k+9T8royjkdrrUdqCvChLwh/NZUw.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
gpadmin@10.0.2.181's password:
Number of key(s) added: 1
Now try logging into the machine, with: "ssh 'gpadmin@10.0.2.181'"
and check to make sure that only the key(s) you wanted were added.
[gpadmin@vm181 ~]$ ssh-copy-id gpadmin@10.0.2.182
[gpadmin@vm181 ~]$ ssh-copy-id gpadmin@10.0.2.183
[gpadmin@vm181 ~]$ ssh-copy-id gpadmin@10.0.2.184
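The per-host ssh-copy-id runs can also be scripted. A minimal sketch that distributes the key by hostname, which has the side benefit of recording each host key in known_hosts under the hostname form that gpssh uses later:
# prompts for the gpadmin password once per host
for h in vm181 vm182 vm183 vm184; do ssh-copy-id gpadmin@$h; done
# verify passwordless login works by hostname
for h in vm181 vm182 vm183 vm184; do ssh gpadmin@$h hostname; done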
Install the RPM
Run as root on every machine.
[root@vm181 ~]# cd /home/gpadmin/
[root@vm181 gpadmin]# yum install ./cloudberry-db-1.6.0-1.el8.x86_64.rpm
A large number of dependency packages will be downloaded and updated along the way. If the default repos are unreachable, switch to the Aliyun vault mirror and delete the old repo files first:
wget -O /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-vault-8.5.2111.repo
rm -f /etc/yum.repos.d/CentOS-Linux-*
yum clean all
yum makecache
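Before retrying the install, a quick check that the replacement repo is active:
dnf repolist    # should now list repos served from mirrors.aliyun.com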
Update the owner of the installation directory
chown -R gpadmin:gpadmin /usr/local/cloudberry*
In practice this affects the following two entries (a symlink and a directory) and everything beneath them:
[gpadmin@vm181 ~]$ ll /usr/local/cloudberry* -d
lrwxrwxrwx 1 gpadmin gpadmin 30 Sep 3 2024 /usr/local/cloudberry-db -> /usr/local/cloudberry-db-1.6.0
drwxr-xr-x 9 gpadmin gpadmin 135 Apr 8 10:03 /usr/local/cloudberry-db-1.6.0
[gpadmin@vm181 ~]$
Configure the coordinator and segments
Perform the steps in this section on a single machine, vm181 in this article.
In this example all four nodes host segments, and the first two additionally serve as the coordinator and the standby.
List of all nodes, all_hosts:
[gpadmin@vm181 ~]$ cat all_hosts
vm181
vm182
vm183
vm184
List of all segment nodes, seg_hosts:
[gpadmin@vm181 ~]$ cat seg_hosts
vm181
vm182
vm183
vm184
Create the primary and mirror directories for the segments
Source the path script for now; later it gets added to the shell environment permanently (in ~/.bashrc).
[gpadmin@vm181 ~]$ source /usr/local/cloudberry-db/greenplum_path.sh
[gpadmin@vm181 ~]$ gpssh -f seg_hosts
=> mkdir -p /data0/primary/
[vm181]
[vm184]
[vm182]
[vm183]
=> mkdir -p /data0/mirror/
[vm181]
[vm184]
[vm182]
[vm183]
=> exit
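gpssh also accepts a command inline (the -e form used for the standby directory below), which makes it easy to verify the directories landed on every segment host; a sketch:
gpssh -f seg_hosts -e 'ls -ld /data0/primary /data0/mirror'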
Create the coordinator and standby data directories
The standby's directory is created here with gpssh; you could just as well SSH to vm182 and create it directly.
[gpadmin@vm181 ~]$ mkdir -p /data0/coordinator/
[gpadmin@vm181 ~]$ gpssh -h vm182 -e 'mkdir -p /data0/coordinator/'
Configure environment variables on the coordinator and standby
Append the following to the end of ~/.bashrc on the coordinator and standby nodes only:
source /usr/local/cloudberry-db/greenplum_path.sh
export COORDINATOR_DATA_DIRECTORY=/data0/coordinator/gpseg-1
Source it to take effect:
[gpadmin@vm181 ~]$ source ~/.bashrc
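A quick check that the environment is in place:
# should print /data0/coordinator/gpseg-1
echo $COORDINATOR_DATA_DIRECTORY
# should resolve to a path under /usr/local/cloudberry-db
which gpinitsystem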
Initialization
Run the steps below on the coordinator node, vm181.
The initialization configuration file
The parameters listed below are the ones changed from the shipped template:
- DATA_DIRECTORY: data directories for the segment (compute) nodes. One segment is created per listed directory; entries may be the same directory (co-locating segments) or different ones (see the sketch after the config listing below).
- COORDINATOR_HOSTNAME: hostname of the coordinator node
- COORDINATOR_DIRECTORY: data directory of the coordinator node
- MIRROR_PORT_BASE: base port for the mirror segments
- MIRROR_DATA_DIRECTORY: data directories for the mirror segments
- DATABASE_NAME: database to create once initialization completes
[gpadmin@vm181 ~]$ cp $GPHOME/docs/cli_help/gpconfigs/gpinitsystem_config .
[gpadmin@vm181 ~]$ vi gpinitsystem_config
[gpadmin@vm181 ~]$ grep -v ^# gpinitsystem_config |grep -v '^$'
SEG_PREFIX=gpseg
PORT_BASE=6000
declare -a DATA_DIRECTORY=(/data0/primary)
COORDINATOR_HOSTNAME=vm181
COORDINATOR_DIRECTORY=/data0/coordinator
COORDINATOR_PORT=5432
TRUSTED_SHELL=ssh
ENCODING=UNICODE
MIRROR_PORT_BASE=7000
declare -a MIRROR_DATA_DIRECTORY=(/data0/mirror)
DATABASE_NAME=warehouse
[gpadmin@vm181 ~]$
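As noted in the parameter list above, gpinitsystem creates one primary segment per DATA_DIRECTORY entry, so repeating a directory yields multiple segments per host. A hypothetical two-primaries-per-host variant (the mirror array must have the same number of entries):
declare -a DATA_DIRECTORY=(/data0/primary /data0/primary)
declare -a MIRROR_DATA_DIRECTORY=(/data0/mirror /data0/mirror)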
Run gpinitsystem
[gpadmin@vm181 ~]$ gpinitsystem -c gpinitsystem_config -h /home/gpadmin/seg_hosts
20250408:10:45:46:053136 gpinitsystem:vm181:gpadmin-[INFO]:-Checking configuration parameters, please wait...
20250408:10:45:46:053136 gpinitsystem:vm181:gpadmin-[INFO]:-Reading CloudberryDB configuration file gpinitsystem_config
20250408:10:45:46:053136 gpinitsystem:vm181:gpadmin-[INFO]:-Locale has not been set in gpinitsystem_config, will set to default value
20250408:10:45:46:053136 gpinitsystem:vm181:gpadmin-[INFO]:-COORDINATOR_MAX_CONNECT not set, will set to default value 250
20250408:10:45:46:053136 gpinitsystem:vm181:gpadmin-[INFO]:-Checking configuration parameters, Completed
20250408:10:45:46:053136 gpinitsystem:vm181:gpadmin-[INFO]:-Commencing multi-home checks, please wait...
....
20250408:10:45:48:053136 gpinitsystem:vm181:gpadmin-[INFO]:-Configuring build for standard array
20250408:10:45:49:053136 gpinitsystem:vm181:gpadmin-[INFO]:-Commencing multi-home checks, Completed
20250408:10:45:49:053136 gpinitsystem:vm181:gpadmin-[INFO]:-Building primary segment instance array, please wait...
....
20250408:10:45:51:053136 gpinitsystem:vm181:gpadmin-[INFO]:-Building group mirror array type , please wait...
....
20250408:10:45:54:053136 gpinitsystem:vm181:gpadmin-[INFO]:-Checking Coordinator host
20250408:10:45:54:053136 gpinitsystem:vm181:gpadmin-[INFO]:-Checking new segment hosts, please wait...
........
20250408:10:46:06:053136 gpinitsystem:vm181:gpadmin-[INFO]:-Checking new segment hosts, Completed
20250408:10:46:06:053136 gpinitsystem:vm181:gpadmin-[INFO]:-Cloudberry Database Creation Parameters
20250408:10:46:06:053136 gpinitsystem:vm181:gpadmin-[INFO]:---------------------------------------
20250408:10:46:06:053136 gpinitsystem:vm181:gpadmin-[INFO]:-Coordinator Configuration
20250408:10:46:06:053136 gpinitsystem:vm181:gpadmin-[INFO]:---------------------------------------
20250408:10:46:06:053136 gpinitsystem:vm181:gpadmin-[INFO]:-Coordinator hostname = vm181
20250408:10:46:06:053136 gpinitsystem:vm181:gpadmin-[INFO]:-Coordinator port = 5432
20250408:10:46:06:053136 gpinitsystem:vm181:gpadmin-[INFO]:-Coordinator instance dir = /data0/coordinator/gpseg-1
20250408:10:46:06:053136 gpinitsystem:vm181:gpadmin-[INFO]:-Coordinator LOCALE =
20250408:10:46:06:053136 gpinitsystem:vm181:gpadmin-[INFO]:-CloudberryDB segment prefix = gpseg
20250408:10:46:06:053136 gpinitsystem:vm181:gpadmin-[INFO]:-Coordinator Database = warehouse
20250408:10:46:06:053136 gpinitsystem:vm181:gpadmin-[INFO]:-Coordinator connections = 250
20250408:10:46:06:053136 gpinitsystem:vm181:gpadmin-[INFO]:-Coordinator buffers = 128000kB
20250408:10:46:06:053136 gpinitsystem:vm181:gpadmin-[INFO]:-Segment connections = 750
20250408:10:46:06:053136 gpinitsystem:vm181:gpadmin-[INFO]:-Segment buffers = 128000kB
20250408:10:46:06:053136 gpinitsystem:vm181:gpadmin-[INFO]:-Encoding = UNICODE
20250408:10:46:06:053136 gpinitsystem:vm181:gpadmin-[INFO]:-Postgres param file = Off
20250408:10:46:06:053136 gpinitsystem:vm181:gpadmin-[INFO]:-Initdb to be used = /usr/local/cloudberry-db-1.6.0/bin/initdb
20250408:10:46:06:053136 gpinitsystem:vm181:gpadmin-[INFO]:-GP_LIBRARY_PATH is = /usr/local/cloudberry-db-1.6.0/lib
20250408:10:46:06:053136 gpinitsystem:vm181:gpadmin-[INFO]:-HEAP_CHECKSUM is = on
20250408:10:46:06:053136 gpinitsystem:vm181:gpadmin-[INFO]:-HBA_HOSTNAMES is = 0
20250408:10:46:06:053136 gpinitsystem:vm181:gpadmin-[INFO]:-Ulimit check = Passed
20250408:10:46:06:053136 gpinitsystem:vm181:gpadmin-[INFO]:-Array host connect type = Single hostname per node
20250408:10:46:06:053136 gpinitsystem:vm181:gpadmin-[INFO]:-Coordinator IP address [1] = ::1
20250408:10:46:06:053136 gpinitsystem:vm181:gpadmin-[INFO]:-Coordinator IP address [2] = 10.0.2.181
20250408:10:46:06:053136 gpinitsystem:vm181:gpadmin-[INFO]:-Coordinator IP address [3] = 192.168.122.1
20250408:10:46:06:053136 gpinitsystem:vm181:gpadmin-[INFO]:-Coordinator IP address [4] = fe80::edea:b0b9:f4e4:fee
20250408:10:46:06:053136 gpinitsystem:vm181:gpadmin-[INFO]:-Standby Coordinator = Not Configured
20250408:10:46:06:053136 gpinitsystem:vm181:gpadmin-[INFO]:-Number of primary segments = 1
20250408:10:46:06:053136 gpinitsystem:vm181:gpadmin-[INFO]:-Total Database segments = 4
20250408:10:46:06:053136 gpinitsystem:vm181:gpadmin-[INFO]:-Trusted shell = ssh
20250408:10:46:06:053136 gpinitsystem:vm181:gpadmin-[INFO]:-Number segment hosts = 4
20250408:10:46:06:053136 gpinitsystem:vm181:gpadmin-[INFO]:-Mirror port base = 7000
20250408:10:46:06:053136 gpinitsystem:vm181:gpadmin-[INFO]:-Number of mirror segments = 1
20250408:10:46:06:053136 gpinitsystem:vm181:gpadmin-[INFO]:-Mirroring config = ON
20250408:10:46:06:053136 gpinitsystem:vm181:gpadmin-[INFO]:-Mirroring type = Group
20250408:10:46:06:053136 gpinitsystem:vm181:gpadmin-[INFO]:----------------------------------------
20250408:10:46:06:053136 gpinitsystem:vm181:gpadmin-[INFO]:-CloudberryDB Primary Segment Configuration
20250408:10:46:06:053136 gpinitsystem:vm181:gpadmin-[INFO]:----------------------------------------
20250408:10:46:06:053136 gpinitsystem:vm181:gpadmin-[INFO]:-vm181 6000 vm181 /data0/primary/gpseg0 2
20250408:10:46:06:053136 gpinitsystem:vm181:gpadmin-[INFO]:-vm182 6000 vm182 /data0/primary/gpseg1 3
20250408:10:46:06:053136 gpinitsystem:vm181:gpadmin-[INFO]:-vm183 6000 vm183 /data0/primary/gpseg2 4
20250408:10:46:06:053136 gpinitsystem:vm181:gpadmin-[INFO]:-vm184 6000 vm184 /data0/primary/gpseg3 5
20250408:10:46:06:053136 gpinitsystem:vm181:gpadmin-[INFO]:---------------------------------------
20250408:10:46:06:053136 gpinitsystem:vm181:gpadmin-[INFO]:-CloudberryDB Mirror Segment Configuration
20250408:10:46:06:053136 gpinitsystem:vm181:gpadmin-[INFO]:---------------------------------------
20250408:10:46:06:053136 gpinitsystem:vm181:gpadmin-[INFO]:-vm182 7000 vm182 /data0/mirror/gpseg0 6
20250408:10:46:06:053136 gpinitsystem:vm181:gpadmin-[INFO]:-vm183 7000 vm183 /data0/mirror/gpseg1 7
20250408:10:46:06:053136 gpinitsystem:vm181:gpadmin-[INFO]:-vm184 7000 vm184 /data0/mirror/gpseg2 8
20250408:10:46:06:053136 gpinitsystem:vm181:gpadmin-[INFO]:-vm181 7000 vm181 /data0/mirror/gpseg3 9
Continue with CloudberryDB creation Yy|Nn (default=N):
> y
20250408:10:46:15:053136 gpinitsystem:vm181:gpadmin-[INFO]:-Building the Coordinator instance database, please wait...
20250408:10:46:19:053136 gpinitsystem:vm181:gpadmin-[INFO]:-Starting the Coordinator in admin mode
20250408:10:46:20:053136 gpinitsystem:vm181:gpadmin-[INFO]:-Commencing parallel build of primary segment instances
20250408:10:46:20:053136 gpinitsystem:vm181:gpadmin-[INFO]:-Spawning parallel processes batch [1], please wait...
....
20250408:10:46:20:053136 gpinitsystem:vm181:gpadmin-[INFO]:-Waiting for parallel processes batch [1], please wait...
..............................................
20250408:10:47:07:053136 gpinitsystem:vm181:gpadmin-[INFO]:------------------------------------------------
20250408:10:47:07:053136 gpinitsystem:vm181:gpadmin-[INFO]:-Parallel process exit status
20250408:10:47:07:053136 gpinitsystem:vm181:gpadmin-[INFO]:------------------------------------------------
20250408:10:47:07:053136 gpinitsystem:vm181:gpadmin-[INFO]:-Total processes marked as completed = 4
20250408:10:47:07:053136 gpinitsystem:vm181:gpadmin-[INFO]:-Total processes marked as killed = 0
20250408:10:47:07:053136 gpinitsystem:vm181:gpadmin-[INFO]:-Total processes marked as failed = 0
20250408:10:47:07:053136 gpinitsystem:vm181:gpadmin-[INFO]:------------------------------------------------
20250408:10:47:07:053136 gpinitsystem:vm181:gpadmin-[INFO]:-Removing back out file
20250408:10:47:07:053136 gpinitsystem:vm181:gpadmin-[INFO]:-No errors generated from parallel processes
20250408:10:47:07:053136 gpinitsystem:vm181:gpadmin-[INFO]:-Restarting the CloudberryDB instance in production mode
20250408:10:47:07:059778 gpstop:vm181:gpadmin-[INFO]:-Starting gpstop with args: -a -l /home/gpadmin/gpAdminLogs -m -d /data0/coordinator/gpseg-1
20250408:10:47:07:059778 gpstop:vm181:gpadmin-[INFO]:-Gathering information and validating the environment...
20250408:10:47:07:059778 gpstop:vm181:gpadmin-[INFO]:-Obtaining CloudberryDB Coordinator catalog information
20250408:10:47:07:059778 gpstop:vm181:gpadmin-[INFO]:-Obtaining Segment details from coordinator...
20250408:10:47:07:059778 gpstop:vm181:gpadmin-[INFO]:-CloudberryDB Version: 'postgres (Cloudberry Database) 1.6.0 build 1'
20250408:10:47:07:059778 gpstop:vm181:gpadmin-[INFO]:-Commencing Coordinator instance shutdown with mode='smart'
20250408:10:47:07:059778 gpstop:vm181:gpadmin-[INFO]:-Coordinator segment instance directory=/data0/coordinator/gpseg-1
20250408:10:47:07:059778 gpstop:vm181:gpadmin-[INFO]:-Stopping coordinator segment and waiting for user connections to finish ...
server shutting down
20250408:10:47:08:059778 gpstop:vm181:gpadmin-[INFO]:-Attempting forceful termination of any leftover coordinator process
20250408:10:47:08:059778 gpstop:vm181:gpadmin-[INFO]:-Terminating processes for segment /data0/coordinator/gpseg-1
20250408:10:47:09:059805 gpstart:vm181:gpadmin-[INFO]:-Starting gpstart with args: -a -l /home/gpadmin/gpAdminLogs -d /data0/coordinator/gpseg-1
20250408:10:47:09:059805 gpstart:vm181:gpadmin-[INFO]:-Gathering information and validating the environment...
20250408:10:47:09:059805 gpstart:vm181:gpadmin-[INFO]:-CloudberryDB Binary Version: 'postgres (Cloudberry Database) 1.6.0 build 1'
20250408:10:47:09:059805 gpstart:vm181:gpadmin-[INFO]:-CloudberryDB Catalog Version: '302407192'
20250408:10:47:09:059805 gpstart:vm181:gpadmin-[INFO]:-Starting Coordinator instance in admin mode
20250408:10:47:09:059805 gpstart:vm181:gpadmin-[INFO]:-CoordinatorStart pg_ctl cmd is env GPSESSID=0000000000 GPERA=None $GPHOME/bin/pg_ctl -D /data0/coordinator/gpseg-1 -l /data0/coordinator/gpseg-1/log/startup.log -w -t 600 -o " -p 5432 -c gp_role=utility " start
20250408:10:47:09:059805 gpstart:vm181:gpadmin-[INFO]:-Obtaining CloudberryDB Coordinator catalog information
20250408:10:47:09:059805 gpstart:vm181:gpadmin-[INFO]:-Obtaining Segment details from coordinator...
20250408:10:47:09:059805 gpstart:vm181:gpadmin-[INFO]:-Setting new coordinator era
20250408:10:47:09:059805 gpstart:vm181:gpadmin-[INFO]:-Coordinator Started...
20250408:10:47:09:059805 gpstart:vm181:gpadmin-[INFO]:-Shutting down coordinator
20250408:10:47:10:059805 gpstart:vm181:gpadmin-[INFO]:-Commencing parallel segment instance startup, please wait...
.
20250408:10:47:11:059805 gpstart:vm181:gpadmin-[INFO]:-Process results...
20250408:10:47:11:059805 gpstart:vm181:gpadmin-[INFO]:-
20250408:10:47:11:059805 gpstart:vm181:gpadmin-[INFO]:-
20250408:10:47:11:059805 gpstart:vm181:gpadmin-[INFO]:-
20250408:10:47:11:059805 gpstart:vm181:gpadmin-[INFO]:-
20250408:10:47:11:059805 gpstart:vm181:gpadmin-[INFO]:-----------------------------------------------------
20250408:10:47:11:059805 gpstart:vm181:gpadmin-[INFO]:- Successful segment starts = 4
20250408:10:47:11:059805 gpstart:vm181:gpadmin-[INFO]:- Failed segment starts = 0
20250408:10:47:11:059805 gpstart:vm181:gpadmin-[INFO]:- Skipped segment starts (segments are marked down in configuration) = 0
20250408:10:47:11:059805 gpstart:vm181:gpadmin-[INFO]:-----------------------------------------------------
20250408:10:47:11:059805 gpstart:vm181:gpadmin-[INFO]:-Successfully started 4 of 4 segment instances
20250408:10:47:11:059805 gpstart:vm181:gpadmin-[INFO]:-----------------------------------------------------
20250408:10:47:11:059805 gpstart:vm181:gpadmin-[INFO]:-Starting Coordinator instance vm181 directory /data0/coordinator/gpseg-1
20250408:10:47:11:059805 gpstart:vm181:gpadmin-[INFO]:-CoordinatorStart pg_ctl cmd is env GPSESSID=0000000000 GPERA=76a3c4e31c2dd425_250408104709 $GPHOME/bin/pg_ctl -D /data0/coordinator/gpseg-1 -l /data0/coordinator/gpseg-1/log/startup.log -w -t 600 -o " -p 5432 -c gp_role=dispatch " start
20250408:10:47:11:059805 gpstart:vm181:gpadmin-[INFO]:-Command pg_ctl reports Coordinator vm181 instance active
20250408:10:47:11:059805 gpstart:vm181:gpadmin-[INFO]:-Connecting to db template1 on host localhost
20250408:10:47:11:059805 gpstart:vm181:gpadmin-[INFO]:-No standby coordinator configured. skipping...
20250408:10:47:11:059805 gpstart:vm181:gpadmin-[INFO]:-Database successfully started
20250408:10:47:11:053136 gpinitsystem:vm181:gpadmin-[INFO]:-Completed restart of CloudberryDB instance in production mode
20250408:10:47:11:053136 gpinitsystem:vm181:gpadmin-[INFO]:-Creating core GPDB extensions
20250408:10:47:20:053136 gpinitsystem:vm181:gpadmin-[INFO]:-Commencing parallel build of mirror segment instances
20250408:10:47:20:053136 gpinitsystem:vm181:gpadmin-[INFO]:-Spawning parallel processes batch [1], please wait...
....
20250408:10:47:20:053136 gpinitsystem:vm181:gpadmin-[INFO]:-Waiting for parallel processes batch [1], please wait...
...........................................................
20250408:10:48:21:053136 gpinitsystem:vm181:gpadmin-[INFO]:------------------------------------------------
20250408:10:48:21:053136 gpinitsystem:vm181:gpadmin-[INFO]:-Parallel process exit status
20250408:10:48:21:053136 gpinitsystem:vm181:gpadmin-[INFO]:------------------------------------------------
20250408:10:48:21:053136 gpinitsystem:vm181:gpadmin-[INFO]:-Total processes marked as completed = 4
20250408:10:48:21:053136 gpinitsystem:vm181:gpadmin-[INFO]:-Total processes marked as killed = 0
20250408:10:48:21:053136 gpinitsystem:vm181:gpadmin-[INFO]:-Total processes marked as failed = 0
20250408:10:48:21:053136 gpinitsystem:vm181:gpadmin-[INFO]:------------------------------------------------
20250408:10:48:23:053136 gpinitsystem:vm181:gpadmin-[WARN]:
20250408:10:48:23:053136 gpinitsystem:vm181:gpadmin-[WARN]:-Failed to start CloudberryDB instance; please review gpinitsystem log to determine failure.
20250408:10:48:23:053136 gpinitsystem:vm181:gpadmin-[INFO]:-Scanning utility log file for any warning messages
20250408:10:48:23:053136 gpinitsystem:vm181:gpadmin-[WARN]:-*******************************************************
20250408:10:48:23:053136 gpinitsystem:vm181:gpadmin-[WARN]:-Scan of log file indicates that some warnings or errors
20250408:10:48:23:053136 gpinitsystem:vm181:gpadmin-[WARN]:-were generated during the array creation
20250408:10:48:23:053136 gpinitsystem:vm181:gpadmin-[INFO]:-Please review contents of log file
20250408:10:48:23:053136 gpinitsystem:vm181:gpadmin-[INFO]:-/home/gpadmin/gpAdminLogs/gpinitsystem_20250408.log
20250408:10:48:23:053136 gpinitsystem:vm181:gpadmin-[INFO]:-To determine level of criticality
20250408:10:48:23:053136 gpinitsystem:vm181:gpadmin-[INFO]:-These messages could be from a previous run of the utility
20250408:10:48:23:053136 gpinitsystem:vm181:gpadmin-[INFO]:-that was called today!
20250408:10:48:23:053136 gpinitsystem:vm181:gpadmin-[WARN]:-*******************************************************
20250408:10:48:23:053136 gpinitsystem:vm181:gpadmin-[INFO]:-Cloudberry Database instance successfully created
20250408:10:48:23:053136 gpinitsystem:vm181:gpadmin-[INFO]:-------------------------------------------------------
20250408:10:48:23:053136 gpinitsystem:vm181:gpadmin-[INFO]:-To complete the environment configuration, please
20250408:10:48:23:053136 gpinitsystem:vm181:gpadmin-[INFO]:-update gpadmin .bashrc file with the following
20250408:10:48:23:053136 gpinitsystem:vm181:gpadmin-[INFO]:-1. Ensure that the greenplum_path.sh file is sourced
20250408:10:48:23:053136 gpinitsystem:vm181:gpadmin-[INFO]:-2. Add "export COORDINATOR_DATA_DIRECTORY=/data0/coordinator/gpseg-1"
20250408:10:48:23:053136 gpinitsystem:vm181:gpadmin-[INFO]:- to access the CloudberryDB scripts for this instance:
20250408:10:48:23:053136 gpinitsystem:vm181:gpadmin-[INFO]:- or, use -d /data0/coordinator/gpseg-1 option for the CloudberryDB scripts
20250408:10:48:23:053136 gpinitsystem:vm181:gpadmin-[INFO]:- Example gpstate -d /data0/coordinator/gpseg-1
20250408:10:48:23:053136 gpinitsystem:vm181:gpadmin-[INFO]:-Script log file = /home/gpadmin/gpAdminLogs/gpinitsystem_20250408.log
20250408:10:48:23:053136 gpinitsystem:vm181:gpadmin-[INFO]:-To remove instance, run gpdeletesystem utility
20250408:10:48:23:053136 gpinitsystem:vm181:gpadmin-[INFO]:-To initialize a Standby Coordinator Segment for this CloudberryDB instance
20250408:10:48:23:053136 gpinitsystem:vm181:gpadmin-[INFO]:-Review options for gpinitstandby
20250408:10:48:23:053136 gpinitsystem:vm181:gpadmin-[INFO]:-------------------------------------------------------
20250408:10:48:23:053136 gpinitsystem:vm181:gpadmin-[INFO]:-The Coordinator /data0/coordinator/gpseg-1/pg_hba.conf post gpinitsystem
20250408:10:48:24:053136 gpinitsystem:vm181:gpadmin-[INFO]:-has been configured to allow all hosts within this new
20250408:10:48:24:053136 gpinitsystem:vm181:gpadmin-[INFO]:-array to intercommunicate. Any hosts external to this
20250408:10:48:24:053136 gpinitsystem:vm181:gpadmin-[INFO]:-new array must be explicitly added to this file
20250408:10:48:24:053136 gpinitsystem:vm181:gpadmin-[INFO]:-------------------------------------------------------
[gpadmin@vm181 ~]$
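Note the "Standby Coordinator = Not Configured" line in the summary above: gpinitsystem builds only the coordinator and the segments, so the standby that /data0/coordinator on vm182 was prepared for still has to be added separately. A sketch with gpinitstandby, run as gpadmin on vm181 (by default the standby reuses the coordinator's data directory path):
gpinitstandby -s vm182
The standby then appears in gp_segment_configuration as a second content=-1 entry, as in the query output at the end of this article.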
Check the cluster status with gpstate
[gpadmin@vm181 ~]$ gpstate
20250408:10:51:17:061988 gpstate:vm181:gpadmin-[INFO]:-Starting gpstate with args:
20250408:10:51:17:061988 gpstate:vm181:gpadmin-[INFO]:-local CloudberryDB Version: 'postgres (Cloudberry Database) 1.6.0 build 1'
20250408:10:51:17:061988 gpstate:vm181:gpadmin-[INFO]:-coordinator CloudberryDB Version: 'PostgreSQL 14.4 (Cloudberry Database 1.6.0 build 1) on x86_64-pc-linux-gnu, compiled by gcc (GCC) 8.5.0 20210514 (Red Hat 8.5.0-22), 64-bit compiled on Sep 3 2024 07:22:14'
20250408:10:51:17:061988 gpstate:vm181:gpadmin-[INFO]:-Obtaining Segment details from coordinator...
20250408:10:51:17:061988 gpstate:vm181:gpadmin-[INFO]:-Gathering data from segments...
.
20250408:10:51:19:061988 gpstate:vm181:gpadmin-[INFO]:-CloudberryDB instance status summary
20250408:10:51:19:061988 gpstate:vm181:gpadmin-[INFO]:-----------------------------------------------------
20250408:10:51:19:061988 gpstate:vm181:gpadmin-[INFO]:- Coordinator instance = Active
20250408:10:51:19:061988 gpstate:vm181:gpadmin-[INFO]:- Coordinator standby = No coordinator standby configured
20250408:10:51:19:061988 gpstate:vm181:gpadmin-[INFO]:- Total segment instance count from metadata = 8
20250408:10:51:19:061988 gpstate:vm181:gpadmin-[INFO]:-----------------------------------------------------
20250408:10:51:19:061988 gpstate:vm181:gpadmin-[INFO]:- Primary Segment Status
20250408:10:51:19:061988 gpstate:vm181:gpadmin-[INFO]:-----------------------------------------------------
20250408:10:51:19:061988 gpstate:vm181:gpadmin-[INFO]:- Total primary segments = 4
20250408:10:51:19:061988 gpstate:vm181:gpadmin-[INFO]:- Total primary segment valid (at coordinator) = 4
20250408:10:51:19:061988 gpstate:vm181:gpadmin-[INFO]:- Total primary segment failures (at coordinator) = 0
20250408:10:51:19:061988 gpstate:vm181:gpadmin-[INFO]:- Total number of postmaster.pid files missing = 0
20250408:10:51:19:061988 gpstate:vm181:gpadmin-[INFO]:- Total number of postmaster.pid files found = 4
20250408:10:51:19:061988 gpstate:vm181:gpadmin-[INFO]:- Total number of postmaster.pid PIDs missing = 0
20250408:10:51:19:061988 gpstate:vm181:gpadmin-[INFO]:- Total number of postmaster.pid PIDs found = 4
20250408:10:51:19:061988 gpstate:vm181:gpadmin-[INFO]:- Total number of /tmp lock files missing = 0
20250408:10:51:19:061988 gpstate:vm181:gpadmin-[INFO]:- Total number of /tmp lock files found = 4
20250408:10:51:19:061988 gpstate:vm181:gpadmin-[INFO]:- Total number postmaster processes missing = 0
20250408:10:51:19:061988 gpstate:vm181:gpadmin-[INFO]:- Total number postmaster processes found = 4
20250408:10:51:19:061988 gpstate:vm181:gpadmin-[INFO]:-----------------------------------------------------
20250408:10:51:19:061988 gpstate:vm181:gpadmin-[INFO]:- Mirror Segment Status
20250408:10:51:19:061988 gpstate:vm181:gpadmin-[INFO]:-----------------------------------------------------
20250408:10:51:19:061988 gpstate:vm181:gpadmin-[INFO]:- Total mirror segments = 4
20250408:10:51:19:061988 gpstate:vm181:gpadmin-[INFO]:- Total mirror segment valid (at coordinator) = 4
20250408:10:51:19:061988 gpstate:vm181:gpadmin-[INFO]:- Total mirror segment failures (at coordinator) = 0
20250408:10:51:19:061988 gpstate:vm181:gpadmin-[INFO]:- Total number of postmaster.pid files missing = 0
20250408:10:51:19:061988 gpstate:vm181:gpadmin-[INFO]:- Total number of postmaster.pid files found = 4
20250408:10:51:19:061988 gpstate:vm181:gpadmin-[INFO]:- Total number of postmaster.pid PIDs missing = 0
20250408:10:51:19:061988 gpstate:vm181:gpadmin-[INFO]:- Total number of postmaster.pid PIDs found = 4
20250408:10:51:19:061988 gpstate:vm181:gpadmin-[INFO]:- Total number of /tmp lock files missing = 0
20250408:10:51:19:061988 gpstate:vm181:gpadmin-[INFO]:- Total number of /tmp lock files found = 4
20250408:10:51:19:061988 gpstate:vm181:gpadmin-[INFO]:- Total number postmaster processes missing = 0
20250408:10:51:19:061988 gpstate:vm181:gpadmin-[INFO]:- Total number postmaster processes found = 4
20250408:10:51:19:061988 gpstate:vm181:gpadmin-[INFO]:- Total number mirror segments acting as primary segments = 0
20250408:10:51:19:061988 gpstate:vm181:gpadmin-[INFO]:- Total number mirror segments acting as mirror segments = 4
20250408:10:51:19:061988 gpstate:vm181:gpadmin-[INFO]:-----------------------------------------------------
[gpadmin@vm181 ~]$
Log in to the database
[gpadmin@vm181 ~]$ psql warehouse
psql (14.4, server 14.4)
Type "help" for help.
warehouse=# SELECT * FROM gp_segment_configuration;
dbid | content | role | preferred_role | mode | status | port | hostname | address | datadir | warehouseid
------+---------+------+----------------+------+--------+------+----------+---------+----------------------------+-------------
1 | -1 | p | p | n | u | 5432 | vm181 | vm181 | /data0/coordinator/gpseg-1 | 0
3 | 1 | p | p | s | u | 6000 | vm182 | vm182 | /data0/primary/gpseg1 | 0
7 | 1 | m | m | s | u | 7000 | vm183 | vm183 | /data0/mirror/gpseg1 | 0
4 | 2 | p | p | s | u | 6000 | vm183 | vm183 | /data0/primary/gpseg2 | 0
8 | 2 | m | m | s | u | 7000 | vm184 | vm184 | /data0/mirror/gpseg2 | 0
10 | -1 | m | m | s | u | 5432 | vm182 | vm182 | /data0/coordinator/gpseg-1 | 0
9 | 3 | p | m | s | u | 7000 | vm181 | vm181 | /data0/mirror/gpseg3 | 0
5 | 3 | m | p | s | u | 6000 | vm184 | vm184 | /data0/primary/gpseg3 | 0
6 | 0 | p | m | s | u | 7000 | vm182 | vm182 | /data0/mirror/gpseg0 | 0
2 | 0 | m | p | s | u | 6000 | vm181 | vm181 | /data0/primary/gpseg0 | 0
(10 rows)
warehouse=#
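gp_segment_configuration returns rows in no particular order; sorting by content pairs each primary with its mirror. Note also that in the snapshot above role and preferred_role differ for contents 0 and 3, meaning some mirrors are acting as primaries (a failover happened at some point); once the preferred primaries are healthy, gprecoverseg -r can rebalance them back. A sorted query, as a sketch:
psql warehouse -c "SELECT content, preferred_role, role, mode, status, hostname, port, datadir FROM gp_segment_configuration ORDER BY content, preferred_role DESC;"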