
MFS Distributed File System (Testing)

For the installation guide, see: MFS Distributed File System (Installation)

For the configuration guide, see: MFS Distributed File System (Configuration)
MFS testing, continued from the previous posts.

I. Deletion and trash tests

1. Set the trash retention time for deleted files
The default retention time is 1 day (86400 seconds):

[root@mfsclient data]# mfssettrashtime 86400 /data/mfs
/data/mfs: 86400

Check the setting:

[root@mfsclient data]# mfsgettrashtime /data/mfs
/data/mfs: 86400
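
The retention time can also be tuned per subtree. A minimal sketch, assuming a directory /data/mfs/important (hypothetical path) whose deleted files should stay recoverable for 7 days:

# set the trash time recursively, then verify it
mfssettrashtime -r 604800 /data/mfs/important
mfsgettrashtime /data/mfs/important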

2. The trash directory
Deleted files can be recovered from trash. With the MFS client installed, mount the mfsmeta filesystem with the -m option to inspect it:

[root@mfsclient /]# mfsmount /data/mfs -m -H mfsmaster
mfsmaster accepted connection with parameters: read-write,restricted_ip
[root@mfsclient /]# cd /data/mfs
[root@mfsclient mfs]# ls
sustained  trash

Every file deleted from MFS ends up in trash, so recovery is just a matter of mounting with the -m option.
sustained holds deleted files that are still being read; once reading finishes, they move on to trash.
Testing trash:

[root@mfsclient mfs]# touch insoz.com/index.html
[root@mfsclient mfs]# ls
insoz.com
[root@mfsclient mfs]# rm -rf insoz.com
In trash:
[root@mfstrash trash]# ls
00000028|insoz.com  0000002B|insoz.com|index.html  undel
[root@mfstrash trash]#

Each entry's name consists of an eight-digit hexadecimal i-node number and the deleted file's path, separated by | characters.
Moving one of these entries into the undel directory restores the original file to its correct path in the MooseFS filesystem:

mv 00000030\|insoz.com\|index.html undel/
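
Because the | characters need escaping in the shell, a find-based sketch can be handier when hunting a single entry (assuming the mfsmeta mount from above at /data/mfs):

cd /data/mfs/trash
# move the matching entry into undel/ to restore it
find . -maxdepth 1 -name '*index.html' -exec mv {} undel/ \;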

II. Destructive tests

1. Stop the chunkservers in the environment one at a time until only one is left; the MFS cluster as a whole keeps serving.
Then upload a file and set its copy count (goal) to 3. Next, bring the stopped chunkservers back up, stop the one that had stayed online, and finally verify that the uploaded file is still accessible. If it is, the file was replicated across multiple chunkservers. A sketch of the sequence follows.
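
A minimal sketch of that procedure, assuming three chunkservers reachable as cs1, cs2 and cs3 (hypothetical host names) and a client mount at /data/mfs:

ssh cs1 'mfschunkserver stop'    # stop chunkservers one by one...
ssh cs2 'mfschunkserver stop'    # ...cs3 alone keeps the cluster serving
cp /tmp/testfile /data/mfs/      # upload a file
mfssetgoal 3 /data/mfs/testfile  # ask for 3 copies
ssh cs1 'mfschunkserver start'   # bring the stopped servers back
ssh cs2 'mfschunkserver start'
# give the master time to replicate; wait until 3 copies are listed
mfsfileinfo /data/mfs/testfile
ssh cs3 'mfschunkserver stop'    # now stop the server that stayed up
md5sum /data/mfs/testfile        # still readable => replication worked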

III. Master (metadata server) tests

1. Simulate the master process being killed unexpectedly, then perform the recovery
Stop the master:
[root@mfsmaster trash]# ps -ef|grep mfsmaster
root 21269 5485 0 13:48 pts/4 00:00:00 grep mfsmaster
mfs 26880 1 0 Jun02 ? 00:10:47 mfsmaster -a
[root@mfsmaster trash]# kill -9 26880
Start the master:

[root@mfsmaster trash]# mfsmaster start
open files limit has been set to: 4096
working directory: /var/lib/mfs
lockfile created and locked
initializing mfsmaster modules ...
exports file has been loaded
topology file has been loaded
loading metadata ...
can't find metadata.mfs - try using option '-a'
init: metadata manager failed !!!
error occured during initialization - exiting

Initialization fails because metadata.mfs cannot be found.
Run the recovery:

[root@mfsmaster trash]# mfsmaster -a
open files limit has been set to: 4096
working directory: /var/lib/mfs
lockfile created and locked
initializing mfsmaster modules ...
exports file has been loaded
topology file has been loaded
loading metadata ...
loading sessions data ... ok (0.0000)
loading objects (files,directories,etc.) ... ok (0.0354)
loading names ... ok (0.0354)
loading deletion timestamps ... ok (0.0000)
loading quota definitions ... ok (0.0000)
loading xattr data ... ok (0.0000)
loading posix_acl data ... ok (0.0000)
loading open files data ... ok (0.0000)
loading chunkservers data ... ok (0.0000)
loading chunks data ... ok (0.0000)
checking filesystem consistency ... ok
connecting files and chunks ... ok
all inodes: 4
directory inodes: 2
file inodes: 2
chunks: 0
metadata file has been loaded
stats file has been loaded
master <-> metaloggers module: listen on *:9419
master <-> chunkservers module: listen on *:9420
main master server module: listen on *:9421
mfsmaster daemon initialized properly

Start the master:

[root@mfsmaster lib]# mfsmaster start
open files limit has been set to: 4096
working directory: /var/lib/mfs
lockfile created and locked
initializing mfsmaster modules ...
exports file has been loaded
topology file has been loaded
loading metadata ...
loading sessions data ... ok (0.0000)
loading objects (files,directories,etc.) ... ok (0.0354)
loading names ... ok (0.0354)
loading deletion timestamps ... ok (0.0000)
loading quota definitions ... ok (0.0000)
loading xattr data ... ok (0.0000)
loading posix_acl data ... ok (0.0000)
loading open files data ... ok (0.0000)
loading chunkservers data ... ok (0.0000)
loading chunks data ... ok (0.0000)
checking filesystem consistency ... ok
connecting files and chunks ... ok
all inodes: 7
directory inodes: 5
file inodes: 2
chunks: 0
metadata file has been loaded
stats file has been loaded
master <-> metaloggers module: listen on *:9419
master <-> chunkservers module: listen on *:9420
main master server module: listen on *:9421
mfsmaster daemon initialized properly

Barring surprises, clients recover their mount state automatically and the data stays intact.
2. Simulate the process being killed unexpectedly with the changelog files destroyed
Kill the mfsmaster process with kill -9.
Delete the mfs data directory to simulate the damage, then start the master; initialization fails:

[root@mfsmaster trash]# mfsmaster start
open files limit has been set to: 4096
working directory: /var/lib/mfs
lockfile created and locked
initializing mfsmaster modules ...
exports file has been loaded
topology file has been loaded
loading metadata ...
can't find metadata.mfs - try using option '-a'
init: metadata manager failed !!!
error occured during initialization - exiting

Restore the backup files from the metalogger server.
Strip the _ml part from every file name:
mv changelog_ml.0.mfs changelog.0.mfs
mv changelog_ml.2.mfs changelog.2.mfs
mv changelog_ml.1.mfs changelog.1.mfs
mv metadata_ml.mfs.back metadata.mfs.back
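
With a larger number of changelogs a loop saves the typing; a sketch, assuming the files were copied into the master's working directory /var/lib/mfs:

cd /var/lib/mfs
# drop the _ml part from every restored file name
for f in *_ml*; do mv "$f" "${f/_ml/}"; done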
Run the recovery:

[root@mfsmaster trash]# mfsmaster -a
open files limit has been set to: 4096
working directory: /var/lib/mfs
lockfile created and locked
initializing mfsmaster modules ...
exports file has been loaded
topology file has been loaded
loading metadata ...
loading sessions data ... ok (0.0000)
loading objects (files,directories,etc.) ... ok (0.0354)
loading names ... ok (0.0354)
loading deletion timestamps ... ok (0.0000)
loading quota definitions ... ok (0.0000)
loading xattr data ... ok (0.0000)
loading posix_acl data ... ok (0.0000)
loading open files data ... ok (0.0000)
loading chunkservers data ... ok (0.0000)
loading chunks data ... ok (0.0000)
checking filesystem consistency ... ok
connecting files and chunks ... ok
all inodes: 4
directory inodes: 2
file inodes: 2
chunks: 0
metadata file has been loaded
stats file has been loaded
master <-> metaloggers module: listen on *:9419
master <-> chunkservers module: listen on *:9420
main master server module: listen on *:9421
mfsmaster daemon initialized properly

Note that metadata.mfs.back must sit in the same directory as the changelog files for the recovery to work.
Start the master:

mfsmaster start

Mount from a client; the data is intact.

MFS Distributed File System (Configuration)

For the installation guide, see: MFS Distributed File System (Installation)
MFS configuration, continued from the previous post.
First, resolve the master's host name via /etc/hosts. To enable master high availability and avoid a single point of failure, you can instead run a DNS server to resolve the name:

echo "192.168.1.1 mfsmaster" >> /etc/hosts

I. Master server
Configuration file:

/etc/mfs/mfsmaster.cfg

Working directory:

/var/lib/mfs/

In the default configuration file every line is commented out, but the values shown are the defaults; to change one, uncomment the line, edit the value, and save.

# WORKING_USER = mfs
# WORKING_GROUP = mfs
# SYSLOG_IDENT = mfsmaster
# LOCK_MEMORY = 0
# NICE_LEVEL = -19
# FILE_UMASK = 027
# DATA_PATH = /var/lib/mfs #data path: where the metadata is stored
# EXPORTS_FILENAME = /etc/mfs/mfsexports.cfg #access-control (exports) file; see the example after this listing
# TOPOLOGY_FILENAME = /etc/mfs/mfstopology.cfg
# BACK_LOGS = 50
# BACK_META_KEEP_PREVIOUS = 1
# CHANGELOG_PRESERVE_SECONDS = 1800
# MISSING_LOG_CAPACITY = 100000
# MATOML_LISTEN_HOST = *
# MATOML_LISTEN_PORT = 9419 #port used to replicate the master's changelogs to the metaloggers
# MATOCS_LISTEN_HOST = *
# MATOCS_LISTEN_PORT = 9420 #port on which the master accepts chunkserver connections
# MATOCS_TIMEOUT = 10
# REPLICATIONS_DELAY_INIT = 300
# CHUNKS_LOOP_MAX_CPS = 100000
# CHUNKS_LOOP_MIN_TIME = 300
# CHUNKS_SOFT_DEL_LIMIT = 10
# CHUNKS_HARD_DEL_LIMIT = 25
# CHUNKS_WRITE_REP_LIMIT = 2,1,1,4
# CHUNKS_READ_REP_LIMIT = 10,5,2,5
# CS_HEAVY_LOAD_THRESHOLD = 100
# CS_HEAVY_LOAD_RATIO_THRESHOLD = 5.0
# CS_HEAVY_LOAD_GRACE_PERIOD = 900
# ACCEPTABLE_PERCENTAGE_DIFFERENCE = 1.0
# PRIORITY_QUEUES_LENGTH = 1000000
# MATOCL_LISTEN_HOST = *
# MATOCL_LISTEN_PORT = 9421 #port on which the master listens for client (mfsmount) connections
# SESSION_SUSTAIN_TIME = 86400
# QUOTA_TIME_LIMIT = 604800
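
EXPORTS_FILENAME above points at the access-control file. A minimal sketch of entries for this setup (the 192.168.1.0/24 range is assumed from the environment in the installation post; the format is address, directory, options):

# /etc/mfs/mfsexports.cfg
192.168.1.0/24  /  rw,alldirs,maproot=0
# "." exports the meta filesystem (trash), used by mfsmount -m
192.168.1.0/24  .  rw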

II. Metalogger server
Configuration file:

/etc/mfs/mfsmetalogger.cfg

Only one setting has to change: uncomment MASTER_HOST and point it at the master.

MASTER_HOST = mfsmaster
DATA_PATH = /var/lib/mfs #path where files fetched from the master are stored
BACK_LOGS = 50 #number of backup changelog files to keep
META_DOWNLOAD_FREQ = 24 #metadata download interval, in hours

META_DOWNLOAD_FREQ defaults to 24, i.e. once a day a metadata.mfs.back file is downloaded from the master. When the master shuts down or fails, its metadata.mfs.back file is gone, so recovering the whole MFS requires fetching the copy from the metalogger. Pay special attention to this file: only together with the changelog files can it restore a damaged distributed file system.
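
A quick way to confirm the metalogger is replicating is to look into its DATA_PATH; exact names vary with rotation, but these are the same files the recovery test above relies on:

ls /var/lib/mfs
# expect changelog_ml.*.mfs files plus metadata_ml.mfs.back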

III. Chunkserver
Configuration files:

/etc/mfs/mfschunkserver.cfg
/etc/mfs/mfshdd.cfg

Two files need changing on each chunkserver: mfschunkserver.cfg and mfshdd.cfg. The space a server hands over to MFS is ideally a dedicated disk or a RAID volume; the minimum is a dedicated partition (a disk-preparation sketch follows at the end of this section).
1. Edit /etc/mfs/mfschunkserver.cfg

MASTER_HOST = mfsmaster #host name of the master; anything that resolves to it works
HDD_CONF_FILENAME = /etc/mfs/mfshdd.cfg #config file listing the space allocated to MFS

2. Edit /etc/mfs/mfshdd.cfg

echo "/data/mfs 2.0TiB" >> /etc/mfs/mfshdd.cfg
chown -R mfs:mfs /data/mfs # give the mfs user ownership of the storage directory
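
A sketch of preparing a dedicated disk for a chunkserver, assuming a blank /dev/sdb1 (hypothetical device; adjust the size in mfshdd.cfg to the actual capacity):

mkfs.ext4 /dev/sdb1
mkdir -p /data/mfs
mount /dev/sdb1 /data/mfs
echo "/dev/sdb1 /data/mfs ext4 defaults,noatime 0 0" >> /etc/fstab
echo "/data/mfs 2.0TiB" >> /etc/mfs/mfshdd.cfg
chown -R mfs:mfs /data/mfs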

IV. MFS client
With moosefs-client installed, just create a mount point and mount:

mkdir /data/mfs
mfsmount /data/mfs -H mfsmaster

Check the mount with df -h:

[root@mfsclient ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda2              30G  6.1G   22G  22% /
tmpfs                 3.9G     0  3.9G   0% /dev/shm
/dev/sda1            1008M   53M  904M   6% /boot
/dev/sda6             336G   71G  249G  23% /data
/dev/sda3              20G  172M   19G   1% /home
mfsmaster:9421        2.5T   32G  2.5T   2% /data/mfs

Set the number of file copies (goal):

mfssetgoal 3 /data/mfs

Check the setting:

[root@mfsclient mfs]# touch insoz.com
[root@mfsclient mfs]# mfsgetgoal insoz.com
insoz.com: 3
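
Note that a goal set on a directory is inherited by files created in it afterwards; to change files that already exist, the recursive form can be used (sketch):

mfssetgoal -r 3 /data/mfs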

Check the actual copies:

[root@mfsclient mfs]# mfsfileinfo insoz.com
insoz.com:
        chunk 0: 0000000000000208_00000001 / (id:520 ver:1)
                copy 1: 192.168.1.2:9422
                copy 2: 192.168.1.3:9422

V. Start at boot, auto-mount

Master server:
echo "mfsmaster start" >> /etc/rc.local
Chunkserver:
echo "mfschunkserver start">> /etc/rc.local
Metalogger server:
echo "mfsmetalogger start" >> /etc/rc.local
Client:
echo "mfsmount /data/mfs -H mfsmaster" >> /etc/rc.local

For the installation guide, see: MFS Distributed File System (Installation)

MFS Distributed File System (Installation)

For the configuration guide, see: MFS Distributed File System (Configuration)
I. Advantages of MFS
1. A general-purpose file system that can be mounted and used directly.
2. Online capacity expansion; the architecture scales out well.
3. Simple deployment (yum).
4. Highly available file objects: any number of copies can be kept per file, and the extra copies also speed up reads and writes.
5. Trash (recycle bin) support.
6. Web GUI monitoring interface.
7. Multiple masters, removing the single point of failure (version 2.0 and above).

II. Official architecture diagrams
[architecture diagrams mfs1 and mfs2 not reproduced here]
III. MFS file system structure
Four roles:
1. mfsmaster: manages all chunkservers, schedules file reads and writes, reclaims file space, handles recovery, and manages multi-node copies.
2. mfsmetalogger: backs up the master's changelog files (named changelog_ml.*.mfs) so that it can take over when the master server fails.
3. mfschunkserver: connects to the master, follows its scheduling, provides the storage space, and transfers data to and from clients.
4. mfsclient: mounts the storage managed by the master through the FUSE kernel interface; the shared file system is then used exactly like a local Unix file system.
IV. Environment
OS: CentOS 6.5 x64
master: 1 (DNS round-robin load balancing is planned; not covered in this post)
metalogger: 1
chunkserver: 2
client: 2
V. Installation
1. First install the repository GPG key and the yum repo
GPG key:

curl "http://ppa.moosefs.com/RPM-GPG-KEY-MooseFS" > /etc/pki/rpm-gpg/RPM-GPG-KEY-MooseFS

Repo for the sysv OS family (CentOS 6):

curl "http://ppa.moosefs.com/MooseFS-stable-el6.repo" > /etc/yum.repos.d/MooseFS.repo

Repo for the systemd OS family (CentOS 7):

curl "http://ppa.moosefs.com/MooseFS-stable-rhsystemd.repo" > /etc/yum.repos.d/MooseFS.repo

2. Master Server: install on the master server

yum install moosefs-master
yum install moosefs-cli

Start:

mfsmaster start
service moosefs-master start

3. Chunkservers: install on both chunkserver machines (chunkservers can be added dynamically)

yum install moosefs-chunkserver

Start:

mfschunkserver start
service moosefs-chunkserver start

4. Metaloggers: install on the metalogger server; it is best not to co-locate it with the master

yum install moosefs-metalogger

Start:

mfsmetalogger start
service moosefs-metalogger start

5. Install MooseFS CGI and moosefs-cgiserv

yum install moosefs-cgi moosefs-cgiserv -y

Start:

mfscgiserv start
service moosefs-cgiserv start

Access:

http://192.168.1.1:9425

6. Clients: install on every server that needs to mount the file system

yum install moosefs-client

7. Enable all installed services at boot

chkconfig moosefs-master on
chkconfig moosefs-metalogger on
chkconfig moosefs-cgiserv on
chkconfig moosefs-chunkserver on

Confirm the installation:

[root@ralsun160 /]# netstat -antlp|grep mfs
tcp        0      0 0.0.0.0:9419                0.0.0.0:*                   LISTEN      16896/mfsmaster
tcp        0      0 0.0.0.0:9420                0.0.0.0:*                   LISTEN      16896/mfsmaster
tcp        0      0 0.0.0.0:9421                0.0.0.0:*                   LISTEN      16896/mfsmaster
tcp        0      0 192.168.1.1:9420         	192.168.1.2:40998           ESTABLISHED 16896/mfsmaster
tcp        0      0 192.168.1.1:9419        	192.168.1.1:50880           ESTABLISHED 16896/mfsmaster
tcp        0      0 192.168.1.1:9421        	192.168.1.3:37691           ESTABLISHED 16896/mfsmaster
tcp        0      0 192.168.1.1:9421         	192.168.1.2:57989           ESTABLISHED 16896/mfsmaster
tcp        0      0 192.168.1.1:9420         	192.168.1.2:51840           ESTABLISHED 16896/mfsmaster
tcp        0      0 192.168.1.1:9421         	192.168.1.3:55082           ESTABLISHED 16896/mfsmaster
tcp        0      0 192.168.1.1:50880       	192.168.1.5:9419            ESTABLISHED 17035/mfsmetalogger

For the configuration guide, see: MFS Distributed File System (Configuration)