Articles published in June 2015

Hiding version numbers in Apache, nginx, and PHP

When attackers go after a server, the first step is usually reconnaissance: learning the details of the services it runs, in particular their version numbers. Knowing the exact version of a service, an attacker can look up the known vulnerabilities for that version and use them to attack or break in. Hiding the version numbers removes that easy lead and avoids some unnecessary trouble.

Let's check what the server currently reveals:

insoz:~ insoz$ curl -I http://127.0.0.1/phpinfo.php
HTTP/1.1 200 OK
Server: nginx/1.5.0
Date: Thu, 18 Jun 2015 02:39:32 GMT
Content-Type: text/html
Connection: keep-alive
Vary: Accept-Encoding
X-Powered-By: PHP/5.3.1

Both the nginx and PHP versions are exposed. Here is how to hide them.

First, nginx. Add the following directive to the nginx configuration file nginx.conf:

server_tokens off;
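server_tokens is valid in the http, server, and location contexts; setting it once in the http block covers the whole server. A minimal sketch of the placement:

http {
    server_tokens off;    # report just "Server: nginx", without the version, in headers and error pages
    # ... the rest of the http configuration ...
}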

Next, Apache. Add the following to the Apache configuration file httpd.conf. ServerTokens Prod reduces the Server header to just "Apache", and ServerSignature Off removes the version footer from server-generated pages such as error pages:

ServerTokens Prod
ServerSignature Off

Finally, PHP. Add the following to php.ini; it removes the X-Powered-By header:

expose_php = Off

That's it. Once the changes are in place, the services need a restart; on a SysV-init box the commands might look like this (the service names are assumptions and depend on how nginx, Apache, and PHP were installed):
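service nginx restart      # pick up server_tokens off
service httpd restart      # pick up ServerTokens / ServerSignature
service php-fpm restart    # pick up expose_php = Off

Now test again: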

insoz:~ insoz$ curl -I http://127.0.0.1/phpinfo.php
HTTP/1.1 200 OK
Server: nginx
Date: Thu, 18 Jun 2015 02:41:47 GMT
Content-Type: text/html
Connection: keep-alive
Vary: Accept-Encoding

Linux process management: Supervisor

I. Introduction

Supervisord is a very practical process-management tool written in Python. It turns the applications it manages into daemon processes, lets you start, stop, and restart them with simple commands, and automatically restarts a managed process if it crashes, so programs can recover on their own after an unexpected exit.

II. Installation and configuration

Installing Supervisor is straightforward.

1. Replace the yum repositories

I install via yum here; first, switch to the Aliyun mirrors:

mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.backup
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-6.repo
mv /etc/yum.repos.d/epel.repo /etc/yum.repos.d/epel.repo.backup
mv /etc/yum.repos.d/epel-testing.repo /etc/yum.repos.d/epel-testing.repo.backup
wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-6.repo
yum makecache

2. Install the dependencies and Supervisor

yum install python-setuptools python-setuptools-devel supervisor -y

Relevant paths

Main configuration file: /etc/supervisord.conf
Per-process configuration files: /etc/supervisord/*.conf — one file per managed process; adjust the paths to the software on your server.

Service management

service supervisord start
service supervisord stop
service supervisord restart
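Individual programs are usually controlled through supervisorctl rather than the init script; a few typical invocations (gearmand here refers to the example program added below):

supervisorctl status              # list every managed process and its state
supervisorctl restart gearmand    # restart a single program
supervisorctl stop all            # stop all managed programs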

Edit the configuration file ([official documentation]):

mkdir /etc/supervisord/   # create the directory for per-process configs
echo "[include]" >> /etc/supervisord.conf
echo "files = /etc/supervisord/*.conf" >> /etc/supervisord.conf
Add the web management interface:
echo "
[inet_http_server]         ; inet (TCP) server disabled by default
port=0.0.0.0:9001        ; (ip_address:port specifier, *:port for all iface)
username=admin              ; (default is no username (open server))
password=123456" >> /etc/supervisord.conf
Restart supervisord (service supervisord restart) so the new [inet_http_server] section takes effect, then browse to http://192.168.1.1:9001 (your server's IP) and log in with the username and password configured above.

III. Adding a managed process

As an example, put gearmand under Supervisor's control:

cat > /etc/supervisord/gearmand.conf << EOF
[program:gearmand]
command=/usr/local/sbin/gearmand
priority=1
numprocs=1
autostart=true
autorestart=true
startretries=10
stopsignal=KILL
stopwaitsecs=10
redirect_stderr=true
stdout_logfile=/etc/supervisord/gearmand.log
EOF
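supervisord does not pick up new files in the include directory on its own, so after adding the section, reload the configuration; a typical sequence:

supervisorctl reread             # detect added or changed config files
supervisorctl update             # apply the changes and start the new group
supervisorctl status gearmand    # confirm the process is RUNNING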

[Official documentation]

mfs distributed file system (testing)

For installation, see: mfs distributed file system (installation)

For configuration, see: mfs distributed file system (configuration)
This post covers testing, picking up where those left off.

I. Deletion and trash tests

1. Set how long deleted files are kept before their space is reclaimed
The default trash time is one day, i.e. 86400 seconds:

[root@mfsclient data]# mfssettrashtime 86400 /data/mfs
/data/mfs: 86400

Check the setting:

[root@mfsclient data]# mfsgettrashtime /data/mfs
/data/mfs: 86400
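The trash time can also differ per directory, and the mfs tools accept -r to apply a value recursively. For instance, a one-hour trash time for a scratch subtree might look like this (the path is an example):

mfssettrashtime -r 3600 /data/mfs/tmp    # 1 hour for everything under tmp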

2. The trash
Deleted files can be brought back from the trash. With the mfs client installed, mount the mfsmeta file system with the -m flag to look inside it:

[root@mfsclient /]# mfsmount /data/mfs -m -H mfsmaster
mfsmaster accepted connection with parameters: read-write,restricted_ip
[root@mfsclient /]# cd /data/mfs
[root@mfsclient mfs]# ls
sustained  trash

Every file deleted from mfs lands in trash, so to recover one, mount with the -m flag as above.
sustained holds deleted files that are still being read; once reading finishes they move on to trash.
Testing the trash:

[root@mfsclient mfs]# mkdir insoz.com
[root@mfsclient mfs]# touch insoz.com/index.html
[root@mfsclient mfs]# ls
insoz.com
[root@mfsclient mfs]# rm -rf insoz.com
In the trash:
[root@mfstrash trash]# ls
00000028|insoz.com  0000002B|insoz.com|index.html  undel
[root@mfstrash trash]#

Each entry's name is an eight-digit hexadecimal i-node number followed by the path of the deleted file, with | as the separator.
Moving an entry into the undel directory restores the original file to its correct path in the MooseFS file system:

mv 0000002B\|insoz.com\|index.html undel/
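After the move, the file should reappear at its original path, which can be checked from the regular (non -m) MooseFS mount; assuming it is mounted at /data/mfs as in the earlier posts:

ls /data/mfs/insoz.com    # index.html should be listed again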

II. Destructive tests

1. Stop the chunk servers one by one until a single one is left; the mfs cluster as a whole keeps serving.
Then upload a file and set its copy count (goal) to 3, as sketched below. Start the two chunk servers that were stopped, then stop the one that had stayed up, and finally check whether the last uploaded file is still accessible. If it is, the file was replicated to multiple chunk servers.
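A sketch of that check with the standard MooseFS client tools (the file name is an example):

mfssetgoal 3 /data/mfs/testfile    # request 3 copies of the file's chunks
mfsfileinfo /data/mfs/testfile     # show which chunk servers hold each copy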

III. Metadata server tests

1. Simulate the metadata server process being killed unexpectedly, then recover
Stop the metadata server:
[root@mfsmaster trash]# ps -ef|grep mfsmaster
root 21269 5485 0 13:48 pts/4 00:00:00 grep mfsmaster
mfs 26880 1 0 Jun02 ? 00:10:47 mfsmaster -a
[root@mfsmaster trash]# kill -9 26880
Try to start the metadata server:

[root@mfsmaster trash]# mfsmaster start
open files limit has been set to: 4096
working directory: /var/lib/mfs
lockfile created and locked
initializing mfsmaster modules ...
exports file has been loaded
topology file has been loaded
loading metadata ...
can't find metadata.mfs - try using option '-a'
init: metadata manager failed !!!
error occured during initialization - exiting

Initialization fails because the metadata file is missing.
Run the recovery:

[root@mfsmaster trash]# mfsmaster -a
open files limit has been set to: 4096
working directory: /var/lib/mfs
lockfile created and locked
initializing mfsmaster modules ...
exports file has been loaded
topology file has been loaded
loading metadata ...
loading sessions data ... ok (0.0000)
loading objects (files,directories,etc.) ... ok (0.0354)
loading names ... ok (0.0354)
loading deletion timestamps ... ok (0.0000)
loading quota definitions ... ok (0.0000)
loading xattr data ... ok (0.0000)
loading posix_acl data ... ok (0.0000)
loading open files data ... ok (0.0000)
loading chunkservers data ... ok (0.0000)
loading chunks data ... ok (0.0000)
checking filesystem consistency ... ok
connecting files and chunks ... ok
all inodes: 4
directory inodes: 2
file inodes: 2
chunks: 0
metadata file has been loaded
stats file has been loaded
master <-> metaloggers module: listen on *:9419
master <-> chunkservers module: listen on *:9420
main master server module: listen on *:9421
mfsmaster daemon initialized properly

Stop that instance, then start the metadata server normally to confirm a clean boot:

[root@mfsmaster lib]# mfsmaster start
open files limit has been set to: 4096
working directory: /var/lib/mfs
lockfile created and locked
initializing mfsmaster modules ...
exports file has been loaded
topology file has been loaded
loading metadata ...
loading sessions data ... ok (0.0000)
loading objects (files,directories,etc.) ... ok (0.0354)
loading names ... ok (0.0354)
loading deletion timestamps ... ok (0.0000)
loading quota definitions ... ok (0.0000)
loading xattr data ... ok (0.0000)
loading posix_acl data ... ok (0.0000)
loading open files data ... ok (0.0000)
loading chunkservers data ... ok (0.0000)
loading chunks data ... ok (0.0000)
checking filesystem consistency ... ok
connecting files and chunks ... ok
all inodes: 7
directory inodes: 5
file inodes: 2
chunks: 0
metadata file has been loaded
stats file has been loaded
master <-> metaloggers module: listen on *:9419
master <-> chunkservers module: listen on *:9420
main master server module: listen on *:9421
mfsmaster daemon initialized properly

Barring surprises, clients recover their mounts automatically and the data is intact.
2. Simulate the process being killed and the metadata files being destroyed
Kill the mfsmaster process with kill -9.
Delete the mfs working directory to simulate the damage; starting the metadata server then fails during initialization:

[root@mfsmaster trash]# mfsmaster start
open files limit has been set to: 4096
working directory: /var/lib/mfs
lockfile created and locked
initializing mfsmaster modules ...
exports file has been loaded
topology file has been loaded
loading metadata ...
can't find metadata.mfs - try using option '-a'
init: metadata manager failed !!!
error occured during initialization - exiting

Restore the backups from the metalogger server, stripping the _ml from every file name:
mv changelog_ml.0.mfs changelog.0.mfs
mv changelog_ml.1.mfs changelog.1.mfs
mv changelog_ml.2.mfs changelog.2.mfs
mv metadata_ml.mfs.back metadata.mfs.back
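The same renames in one line of bash (equivalent to the mv commands above):

for f in changelog_ml.*.mfs metadata_ml.mfs.back; do mv "$f" "${f/_ml/}"; done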
Run the recovery:

[root@mfsmaster trash]# mfsmaster -a
open files limit has been set to: 4096
working directory: /var/lib/mfs
lockfile created and locked
initializing mfsmaster modules ...
exports file has been loaded
topology file has been loaded
loading metadata ...
loading sessions data ... ok (0.0000)
loading objects (files,directories,etc.) ... ok (0.0354)
loading names ... ok (0.0354)
loading deletion timestamps ... ok (0.0000)
loading quota definitions ... ok (0.0000)
loading xattr data ... ok (0.0000)
loading posix_acl data ... ok (0.0000)
loading open files data ... ok (0.0000)
loading chunkservers data ... ok (0.0000)
loading chunks data ... ok (0.0000)
checking filesystem consistency ... ok
connecting files and chunks ... ok
all inodes: 4
directory inodes: 2
file inodes: 2
chunks: 0
metadata file has been loaded
stats file has been loaded
master <-> metaloggers module: listen on *:9419
master <-> chunkservers module: listen on *:9420
main master server module: listen on *:9421
mfsmaster daemon initialized properly

Note that metadata.mfs.back must sit in the same directory as the changelog files, or the recovery will not work.
Start the metadata server:

mfsmaster start

Mount from the client; the data is intact.