Sid*_*tha 18
SSH into the glusterfs machine you want to keep and run:
[siddhartha@glusterfs-01-perf ~]$ sudo gluster peer status
Number of Peers: 1
Hostname: 10.240.0.123
Port: 24007
Uuid: 03747753-a2cc-47dc-8989-62203a7d31cd
State: Peer in Cluster (Connected)
This shows us the other peer, the one we want to get rid of.
To detach it, try:
sudo gluster peer detach 10.240.0.123
This will probably fail:
peer detach: failed: Brick(s) with the peer 10.240.0.123 exist in cluster
We need to get rid of the brick first:
[siddhartha@glusterfs-01-perf ~]$ sudo gluster volume info
Volume Name: glusterfs
Type: Replicate
Volume ID: 563f8593-4592-430f-9f0b-c9472c12570b
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 10.240.0.122:/mnt/storage/glusterfs
Brick2: 10.240.0.123:/mnt/storage/glusterfs
To remove Brick2, run:
[siddhartha@glusterfs-01-perf ~]$ sudo gluster volume remove-brick glusterfs 10.240.0.123:/mnt/storage/glusterfs
This may fail too:
Removing brick(s) can result in data loss. Do you want to Continue? (y/n) y
volume remove-brick commit force: failed: Removing bricks from replicate configuration is not allowed without reducing replica count explicitly.
Our replica count is set to 2 and needs to be explicitly reduced to 1, so add a replica 1 flag to the previous command:
[siddhartha@glusterfs-01-perf ~]$ sudo gluster volume remove-brick glusterfs replica 1 10.240.0.123:/mnt/storage/glusterfs
Removing brick(s) can result in data loss. Do you want to Continue? (y/n) y
volume remove-brick commit force: success
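Note that depending on your GlusterFS release (this is an assumption about your version, not something shown in the session above), remove-brick may also expect an explicit mode keyword such as start, commit, or force. A dry-run sketch of the one-shot variant, with the command echoed rather than executed:

```shell
# Dry-run: build and print the command instead of executing it.
PEER=10.240.0.123   # peer being removed (from the session above)
VOL=glusterfs       # volume name (from the session above)
CMD="sudo gluster volume remove-brick $VOL replica 1 $PEER:/mnt/storage/glusterfs force"
echo "$CMD"
```

If your gluster rejects the plain form shown earlier, re-running it with a trailing force is the usual fix for a one-shot replica reduction.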
That should do the trick:
[siddhartha@glusterfs-01-perf ~]$ sudo gluster volume info glusterfs
Volume Name: glusterfs
Type: Distribute
Volume ID: 563f8593-4592-430f-9f0b-c9472c12570b
Status: Started
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: 10.240.0.122:/mnt/storage/glusterfs
You can now go ahead and terminate the other machine.
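The whole procedure above can be sketched as a single script. The values of PEER, VOL, and BRICK come from the session shown earlier; the run() wrapper just echoes each command (a dry run), so swap it out to actually execute them:

```shell
#!/bin/sh
# End-to-end sketch of the procedure above (dry run: commands are
# echoed, not executed). Values taken from the session in this answer.
PEER=10.240.0.123
VOL=glusterfs
BRICK=$PEER:/mnt/storage/glusterfs

run() { echo "+ $*"; }   # replace the echo with "$@" to execute for real

run sudo gluster peer status                                 # confirm the peer is present
run sudo gluster volume remove-brick "$VOL" replica 1 "$BRICK"
run sudo gluster volume info "$VOL"                          # verify only Brick1 remains
run sudo gluster peer detach "$PEER"                         # now the detach succeeds
```

The order matters: the brick must be removed (with the replica count reduced) before the peer can be detached, which is exactly the failure the answer walks through.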