glusterfs Self-Heal and Re-Balance Operations
Published: 2019-06-28

My previous article gave only a brief overview of GlusterFS and its advantages, describing some basic commands. Two important features, Self-heal and Re-balance, deserve a closer look, without which that introduction to GlusterFS would be incomplete. Let us get familiar with the terms Self-heal and Re-balance.

What do we mean by Self-heal on replicated volumes?

This feature is available for replicated volumes. Suppose we have a replicated volume with a minimum replica count of 2. Assume that, due to some failure, one or more bricks among the replica bricks go down for a while, and a user happens to delete a file from the mount point; the deletion takes effect only on the bricks that are online.

When the offline brick comes back online at a later time, that file needs to be removed from this brick as well; in other words, a synchronization between the replica bricks, called healing, must be performed. The same applies to files created or modified while a brick was offline. GlusterFS has a built-in self-heal daemon that takes care of these situations whenever bricks come back online.

[Figure: Replicated Volume]
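
Healing does not have to wait for the daemon's periodic crawl; it can also be triggered by hand with the standard heal sub-commands, shown below as a side note using the volume name vol from the walkthrough that follows.

# Heal only the files that GlusterFS already knows need healing
$ gluster volume heal vol

# Crawl the entire volume and heal anything that differs between the replicas
$ gluster volume heal vol full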

What do we mean by Re-balance?

Consider a distributed volume with only one brick. For instance, we create 10 files on the volume through the mount point. All the files now reside on that brick, since it is the only brick in the volume. On adding one more brick to the volume, we may have to re-balance the files between the two bricks. Whenever a volume is expanded or shrunk in GlusterFS, the data needs to be re-balanced among the bricks included in the volume.

[Figure: Distributed Volume]
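
For reference, a re-balance can be run in two modes: fix-layout only recalculates the directory layout so that newly created files can land on the added brick, while a full re-balance also migrates existing files to match the new layout. The volume name distribute below is the one used later in this article.

# Recalculate the layout only; existing files stay where they are
$ gluster volume rebalance distribute fix-layout start

# Recalculate the layout and migrate existing files accordingly
$ gluster volume rebalance distribute start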

Performing Self-heal in GlusterFS

1. Create a replicated volume using the following command.

$ gluster volume create vol replica 2 192.168.1.16:/home/a 192.168.1.16:/home/b

Note: Creating a replicated volume with bricks on the same server may raise a warning; you have to ignore it and proceed.
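
Depending on the GlusterFS version, the CLI either asks for confirmation or refuses outright in this situation; in both cases, appending the force keyword overrides the check, for example:

$ gluster volume create vol replica 2 192.168.1.16:/home/a 192.168.1.16:/home/b force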

2. Start and mount the volume.

$ gluster volume start vol
$ mount -t glusterfs 192.168.1.16:/vol /mnt/
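
If you want the mount to persist across reboots, an /etc/fstab entry along the following lines is commonly used (the _netdev option delays mounting until the network is up; adjust the server address and paths to your setup):

192.168.1.16:/vol  /mnt  glusterfs  defaults,_netdev  0 0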

3. Create a file from the mount point.

$ touch /mnt/foo

4. Verify the same on the two replica bricks.

$ ls /home/a/foo
$ ls /home/b/foo

5. Now take one of the bricks offline by killing the corresponding glusterfs brick process, using the PID obtained from the volume status output.

$ gluster volume status vol
Sample Output
Status of volume: vol
Gluster process					Port	Online	Pid
------------------------------------------------------------------------------
Brick 192.168.1.16:/home/a			49152	  Y	3799
Brick 192.168.1.16:/home/b			49153	  Y	3810
NFS Server on localhost				2049	  Y	3824
Self-heal Daemon on localhost			N/A	  Y	3829

Note: Note the presence of the self-heal daemon on the server.

$ kill 3810
$ gluster volume status vol
Sample Output
Status of volume: vol
Gluster process					Port	Online	Pid
------------------------------------------------------------------------------
Brick 192.168.1.16:/home/a			49152	  Y	3799
Brick 192.168.1.16:/home/b			N/A	  N	N/A
NFS Server on localhost				2049	  Y	3824
Self-heal Daemon on localhost			N/A	  Y	3829

Now the second brick is offline.

6. Delete the file foo from the mount point and check the contents of the bricks.

$ rm -f /mnt/foo
$ ls /home/a
$ ls /home/b
foo

You can see that foo is still present on the second brick.

7. Now bring the brick back online.

$ gluster volume start vol force
$ gluster volume status vol
Sample Output
Status of volume: vol
Gluster process					Port	Online	Pid
------------------------------------------------------------------------------
Brick 192.168.1.16:/home/a			49152	  Y	3799
Brick 192.168.1.16:/home/b			49153	  Y	4110
NFS Server on localhost				2049	  Y	4122
Self-heal Daemon on localhost			N/A	  Y	4129

Now the brick is online.

8. Check the contents of the bricks.

$ ls /home/a/
$ ls /home/b/

The file has been removed from the second brick by the self-heal daemon.

Note: In the case of larger files, the self-heal operation may take a while to complete. You can check the heal status using the following command.

$ gluster volume heal vol info
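
The heal sub-command can also report replicas that have diverged beyond what automatic healing can resolve, which is worth checking if a heal never seems to finish:

$ gluster volume heal vol info split-brain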

Performing Re-balance in GlusterFS

1. Create a distributed volume.

$ gluster volume create distribute 192.168.1.16:/home/c

2. Start and mount the volume.

$ gluster volume start distribute
$ mount -t glusterfs 192.168.1.16:/distribute /mnt/

3. Create 10 files.

$ touch /mnt/file{1..10}
$ ls /mnt/
file1  file10  file2  file3  file4  file5  file6  file7  file8  file9
$ ls /home/c
file1  file10  file2  file3  file4  file5  file6  file7  file8  file9

4. Add another brick to volume distribute.

$ gluster volume add-brick distribute 192.168.1.16:/home/d
$ ls /home/d

5. Run the re-balance.

$ gluster volume rebalance distribute start
volume rebalance: distribute: success: Starting rebalance on volume distribute has been successful.

6. Check the contents.

$ ls /home/c
file1  file2  file5  file6  file8
$ ls /home/d
file10  file3  file4  file7  file9

The files have been re-balanced.
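
If you are curious how files get assigned to bricks, the distribute translator hashes each file name into a range that is stored as an extended attribute on the brick directories. It can be inspected directly on the bricks as root, assuming the getfattr tool from the attr package is installed; this is optional background rather than part of the steps above.

# Show the hash range owned by each brick directory
$ getfattr -n trusted.glusterfs.dht -e hex /home/c
$ getfattr -n trusted.glusterfs.dht -e hex /home/d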

Note: You can check the re-balance status by issuing the following command.

$ gluster volume rebalance distribute status
Sample Output
Node		Rebalanced-files	size	scanned	failures	skipped	status		run time in secs
---------	----------------	------	-------	--------	-------	---------	----------------
localhost	5			0Bytes	15	0		0	completed	1.00
volume rebalance: distribute: success:
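
Shrinking a volume follows the same idea in reverse: remove-brick with the start keyword migrates data off the brick, and the brick is dropped only when you commit. A sketch, using the brick added earlier:

# Start migrating data off the brick that is being removed
$ gluster volume remove-brick distribute 192.168.1.16:/home/d start

# Watch the migration progress
$ gluster volume remove-brick distribute 192.168.1.16:/home/d status

# Once migration is complete, remove the brick from the volume
$ gluster volume remove-brick distribute 192.168.1.16:/home/d commit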

With this, I plan to conclude this series on GlusterFS. Feel free to comment here with your doubts regarding the Self-heal and Re-balance features.

Reposted from: https://my.oschina.net/u/1429171/blog/500122
