
Deploying Ceph on CentOS 6.3

Date: 2013-06-02 18:38  Source: www.chengxuyuans.com

I. Background

Machines in a Ceph deployment fall into two categories: clients and non-clients (MDS, monitor, OSD).
A client only needs Ceph selected when compiling the kernel. The other three roles must additionally build the Ceph userspace sources (download: http://ceph.com/download/), and OSD nodes should also have the btrfs filesystem enabled (building it as a kernel module is fine).
Kernel version recommendations: http://ceph.com/docs/master/install/os-recommendations/#glibc




II. Machine Allocation

IP              Role            Hostname
222.31.76.209   client          localhost.localdomain
222.31.76.178   mds & monitor   ceph_mds
222.31.76.74    osd             ceph_osd0
222.31.76.67    osd             ceph_osd1
222.31.76.235   osd             ceph_osd2

OS: CentOS 6.3
Kernel: linux-3.8.8.tar.xz (stable, 2013-04-17)
Ceph: ceph-0.60.tar.gz (01-Apr-2013 17:42)



III. Build and Configuration

(1) Client
1. Build the latest kernel, 3.8.8
#make mrproper
#make menuconfig    // needs the ncurses-devel package (#yum install ncurses-devel); remember to select Ceph and btrfs in the config
#make all           // on a multi-core machine (e.g. 4 cores), #make -j8 speeds up the build with parallel jobs
#make modules_install
#make install
When the build finishes, edit /etc/grub.conf and reboot into the new kernel. That completes the client-side installation and configuration.
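Before rebooting, it is worth checking that the needed options actually made it into the kernel config. A minimal sketch (the `.config` path, the `KCONFIG` variable, and the helper name are mine for illustration; the option names are from mainline 3.8):

```shell
# Check that Ceph and btrfs support are enabled (=y or =m) in the kernel
# .config before building/rebooting. Adjust KCONFIG to your source tree.
KCONFIG=${KCONFIG:-/usr/src/linux-3.8.8/.config}
check_kconfig() {
    for opt in CONFIG_CEPH_FS CONFIG_CEPH_LIB CONFIG_BTRFS_FS; do
        if grep -Eq "^$opt=(y|m)" "$KCONFIG" 2>/dev/null; then
            echo "$opt: enabled"
        else
            echo "$opt: NOT enabled"
        fi
    done
}
check_kconfig
```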



(2) mds/monitor/osd
1. Build the latest kernel, 3.8.8 (same as the client)
2. Build the Ceph sources
#tar -xvf ceph-0.60.tar.gz
#cd ceph-0.60
#./autogen.sh
#./configure --without-tcmalloc
If configure stops with one of the errors below, the corresponding dependency is missing; install it and rerun:
checking whether -lpthread saves the day... yes
checking for uuid_parse in -luuid... no
configure: error: in `/cwn/ceph/ceph-0.60':
configure: error: libuuid not found
See `config.log' for more details.
Install: #yum install libuuid-devel
checking for __res_nquery in -lresolv... yes
checking for add_key in -lkeyutils... no
configure: error: in `/cwn/ceph/ceph-0.60':
configure: error: libkeyutils not found
See `config.log' for more details.
Install: #yum install keyutils-libs-devel
checking pkg-config is at least version 0.9.0... yes
checking for CRYPTOPP... no
checking for library containing _ZTIN8CryptoPP14CBC_EncryptionE... no
checking for NSS... no
configure: error: in `/cwn/ceph/ceph-0.60':
configure: error: no suitable crypto library found
See `config.log' for more details.
Install (from downloaded rpm packages):
#rpm -ivh cryptopp-5.6.2-2.el6.x86_64.rpm
#rpm -ivh cryptopp-devel-5.6.2-2.el6.x86_64.rpm
checking pkg-config is at least version 0.9.0... yes
checking for CRYPTOPP... no
checking for library containing _ZTIN8CryptoPP14CBC_EncryptionE... -lcryptopp
checking for NSS... no
configure: using cryptopp for cryptography
checking for FCGX_Init in -lfcgi... no
checking for fuse_main in -lfuse... no
configure: error: in `/cwn/ceph/ceph-0.60':
configure: error: no FUSE found (use --without-fuse to disable)
See `config.log' for more details.
Install: #yum install fuse-devel
checking for fuse_main in -lfuse... yes
checking for fuse_getgroups... no
checking jni.h usability... no
checking jni.h presence... no
checking for jni.h... no
checking for LIBEDIT... no
configure: error: in `/cwn/ceph/ceph-0.60':
configure: error: No usable version of libedit found.
See `config.log' for more details.
Install: #yum install libedit-devel
checking for LIBEDIT... yes
checking atomic_ops.h usability... no
checking atomic_ops.h presence... no
checking for atomic_ops.h... no
configure: error: in `/cwn/ceph/ceph-0.60':
configure: error: no libatomic-ops found (use --without-libatomic-ops to disable)
See `config.log' for more details.
Install: #yum install libatomic_ops-devel (alternatively, follow the hint and disable it with #./configure --without-tcmalloc --without-libatomic-ops)
checking for LIBEDIT... yes
checking for snappy_compress in -lsnappy... no
configure: error: in `/cwn/ceph/ceph-0.60':
configure: error: libsnappy not found
See `config.log' for more details.
Install (from downloaded rpm packages):
#rpm -ivh snappy-1.0.5-1.el6.x86_64.rpm
#rpm -ivh snappy-devel-1.0.5-1.el6.x86_64.rpm
checking for snappy_compress in -lsnappy... yes
checking for leveldb_open in -lleveldb... no
configure: error: in `/cwn/ceph/ceph-0.60':
configure: error: libleveldb not found
See `config.log' for more details.
Install (from downloaded rpm packages):
#rpm -ivh leveldb-1.7.0-2.el6.x86_64.rpm
#rpm -ivh leveldb-devel-1.7.0-2.el6.x86_64.rpm
checking for leveldb_open in -lleveldb... yes
checking for io_submit in -laio... no
configure: error: in `/cwn/ceph/ceph-0.60':
configure: error: libaio not found
See `config.log' for more details.
Install: #yum install libaio-devel
checking for sys/wait.h that is POSIX.1 compatible... yes
checking boost/spirit/include/classic_core.hpp usability... no
checking boost/spirit/include/classic_core.hpp presence... no
checking for boost/spirit/include/classic_core.hpp... no
checking boost/spirit.hpp usability... no
checking boost/spirit.hpp presence... no
checking for boost/spirit.hpp... no
configure: error: in `/cwn/ceph/ceph-0.60':
configure: error: "Can't find boost spirit headers"
See `config.log' for more details.
Install: #yum install boost-devel
checking if more special flags are required for pthreads... no
checking whether to check for GCC pthread/shared inconsistencies... yes
checking whether -pthread is sufficient with -shared... yes
configure: creating ./config.status
config.status: creating Makefile
config.status: creating scripts/gtest-config
config.status: creating build-aux/config.h
config.status: executing depfiles commands
config.status: executing libtool commands
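For convenience, the yum-installable dependencies discovered above can be installed in one pass on a fresh node (package names exactly as in the errors above; cryptopp, snappy, and leveldb still come from the downloaded rpm packages). Shown as a dry run:

```shell
# All yum-available build dependencies from the configure errors above.
# The echo makes this a dry run; remove it (and run as root) to install.
DEPS="libuuid-devel keyutils-libs-devel fuse-devel libedit-devel \
libatomic_ops-devel libaio-devel boost-devel expat-devel"
echo yum install -y $DEPS
```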
As shown above, #./configure --without-tcmalloc has now completed and generated the Makefile. Start the actual build:
#make -j8
If the build fails with the following error, expat-devel is not installed:
CXX osdmaptool.o
CXXLD osdmaptool
CXX ceph_dencoder-ceph_dencoder.o
test/encoding/ceph_dencoder.cc: In function 'int main(int, const char**)':
test/encoding/ceph_dencoder.cc:196: note: variable tracking size limit exceeded with -fvar-tracking-assignments, retrying without
CXX ceph_dencoder-rgw_dencoder.o
In file included from rgw/rgw_dencoder.cc:6:
rgw/rgw_acl_s3.h:9:19: error: expat.h: No such file or directory
In file included from rgw/rgw_acl_s3.h:12,
from rgw/rgw_dencoder.cc:6:
rgw/rgw_xml.h:62: error: 'XML_Parser' does not name a type
make[3]: *** [ceph_dencoder-rgw_dencoder.o] Error 1
make[3]: Leaving directory `/cwn/ceph/ceph-0.60/src'
make[2]: *** [all-recursive] Error 1
make[2]: Leaving directory `/cwn/ceph/ceph-0.60/src'
make[1]: *** [all] Error 2
make[1]: Leaving directory `/cwn/ceph/ceph-0.60/src'
make: *** [all-recursive] Error 1
Install: #yum install expat-devel
CXXLD ceph-dencoder
CXXLD cephfs
CXXLD librados-config
CXXLD ceph-fuse
CCLD rbd-fuse
CCLD mount.ceph
CXXLD rbd
CXXLD rados
CXXLD ceph-syn
make[3]: Leaving directory `/cwn/ceph/ceph-0.60/src'
make[2]: Leaving directory `/cwn/ceph/ceph-0.60/src'
make[1]: Leaving directory `/cwn/ceph/ceph-0.60/src'
Making all in man
make[1]: Entering directory `/cwn/ceph/ceph-0.60/man'
make[1]: Nothing to be done for `all'.
make[1]: Leaving directory `/cwn/ceph/ceph-0.60/man'
Output like the above means the build succeeded; now install Ceph:
#make install
libtool: install: ranlib /usr/local/lib/rados-classes/libcls_kvs.a
libtool: finish: PATH="/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/root/bin:/sbin" ldconfig -n /usr/local/lib/rados-classes
----------------------------------------------------------------------
Libraries have been installed in:
/usr/local/lib/rados-classes

If you ever happen to want to link against installed libraries
in a given directory, LIBDIR, you must either use libtool, and
specify the full pathname of the library, or use the `-LLIBDIR'
flag during linking and do at least one of the following:
- add LIBDIR to the `LD_LIBRARY_PATH' environment variable
during execution
- add LIBDIR to the `LD_RUN_PATH' environment variable
during linking
- use the `-Wl,-rpath -Wl,LIBDIR' linker flag
- have your system administrator add LIBDIR to `/etc/ld.so.conf'

See any operating system documentation about shared libraries for
more information, such as the ld(1) and ld.so(8) manual pages.
----------------------------------------------------------------------
test -z "/usr/local/lib/ceph" || /bin/mkdir -p "/usr/local/lib/ceph"
/usr/bin/install -c ceph_common.sh '/usr/local/lib/ceph'
make[4]: Leaving directory `/cwn/ceph/ceph-0.60/src'
make[3]: Leaving directory `/cwn/ceph/ceph-0.60/src'
make[2]: Leaving directory `/cwn/ceph/ceph-0.60/src'
make[1]: Leaving directory `/cwn/ceph/ceph-0.60/src'
Making install in man
make[1]: Entering directory `/cwn/ceph/ceph-0.60/man'
make[2]: Entering directory `/cwn/ceph/ceph-0.60/man'
make[2]: Nothing to be done for `install-exec-am'.
test -z "/usr/local/share/man/man8" || /bin/mkdir -p "/usr/local/share/man/man8"
/usr/bin/install -c -m 644 ceph-osd.8 ceph-mds.8 ceph-mon.8 mkcephfs.8 ceph-fuse.8 ceph-syn.8 crushtool.8 osdmaptool.8 monmaptool.8 ceph-conf.8 ceph-run.8 ceph.8 mount.ceph.8 radosgw.8 radosgw-admin.8 ceph-authtool.8 rados.8 librados-config.8 rbd.8 ceph-clsinfo.8 ceph-debugpack.8 cephfs.8 ceph-dencoder.8 ceph-rbdnamer.8 rbd-fuse.8 '/usr/local/share/man/man8'
make[2]: Leaving directory `/cwn/ceph/ceph-0.60/man'
make[1]: Leaving directory `/cwn/ceph/ceph-0.60/man'
This completes the Ceph build and installation.


3. Configure Ceph
Every node except the client needs a configuration file, ceph.conf, and the file must be identical on all of them. It has to live under /etc/ceph; if you did not change the prefix at ./configure time, the installed samples are under /usr/local/etc/ceph.
#cp ./src/sample.* /usr/local/etc/ceph/
#mv /usr/local/etc/ceph/sample.ceph.conf /usr/local/etc/ceph/ceph.conf
#mv /usr/local/etc/ceph/sample.fetch_config /usr/local/etc/ceph/fetch_config
#cp ./src/init-ceph /etc/init.d/ceph
#mkdir /var/log/ceph    // holds the logs; Ceph does not yet create this directory by itself
Notes:
① When deploying each server, the files that mainly need editing are the two under /usr/local/etc/ceph/: ceph.conf (the cluster configuration file) and fetch_config (a sync script that scp's ceph.conf to every node; I found it did not work for me, so I later wrote my own script).
② On OSD nodes, besides loading the btrfs module, install btrfs-progs (#yum install btrfs-progs) so that the mkfs.btrfs command is available. Each OSD node also needs a partition or logical volume for Ceph to use: either a disk partition (e.g. /dev/sda2) or a logical volume (e.g. /dev/mapper/VolGroup-lv_ceph), as long as it matches what ceph.conf specifies. How to create the partition or logical volume is left to the reader.
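As a hedged illustration of note ②, here is one way the logical volume could be created. Only the VolGroup/lv_ceph names come from this article's ceph.conf; the /dev/sdb device and the 100G size are assumptions. The commands are printed rather than executed so they can be reviewed first:

```shell
# Print the LVM commands that would create /dev/mapper/VolGroup-lv_ceph.
# /dev/sdb and the 100G size are placeholders -- adjust before running.
lv_setup_cmds() {
    echo "pvcreate /dev/sdb"
    echo "vgcreate VolGroup /dev/sdb"
    echo "lvcreate -L 100G -n lv_ceph VolGroup"
}
lv_setup_cmds
```

Review the output, then run `lv_setup_cmds | sh` as root once the device name matches your hardware.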
[root@ceph_mds ceph]# cat /usr/local/etc/ceph/ceph.conf
;
; Sample ceph ceph.conf file.
;
; This file defines cluster membership, the various locations
; that Ceph stores data, and any other runtime options.

; If a 'host' is defined for a daemon, the init.d start/stop script will
; verify that it matches the hostname (or else ignore it). If it is
; not defined, it is assumed that the daemon is intended to start on
; the current host (e.g., in a setup with a startup.conf on each
; node).

; The variables $type, $id and $name are available to use in paths
; $type = The type of daemon, possible values: mon, mds and osd
; $id = The ID of the daemon, for mon.alpha, $id will be alpha
; $name = $type.$id

; For example:
; osd.0
; $type = osd
; $id = 0
; $name = osd.0

; mon.beta
; $type = mon
; $id = beta
; $name = mon.beta

; global
[global]
; enable secure authentication
; auth supported = cephx

; allow ourselves to open a lot of files
max open files = 131072

; set log file
log file = /var/log/ceph/$name.log
; log_to_syslog = true ; uncomment this line to log to syslog

; set up pid files
pid file = /var/run/ceph/$name.pid

; If you want to run an IPv6 cluster, set this to true. Dual-stack isn't possible
;ms bind ipv6 = true

; monitors
; You need at least one. You need at least three if you want to
; tolerate any node failures. Always create an odd number.
[mon]
mon data = /data/mon$id

; If you are using for example the RADOS Gateway and want to have your newly created
; pools a higher replication level, you can set a default
;osd pool default size = 3

; You can also specify a CRUSH rule for new pools
; Wiki: http://ceph.newdream.net/wiki/Custom_data_placement_with_CRUSH
;osd pool default crush rule = 0

; Timing is critical for monitors, but if you want to allow the clocks to drift a
; bit more, you can specify the max drift.
;mon clock drift allowed = 1

; Tell the monitor to backoff from this warning for 30 seconds
;mon clock drift warn backoff = 30

; logging, for debugging monitor crashes, in order of
; their likelihood of being helpful :)
debug ms = 1
;debug mon = 20
;debug paxos = 20
;debug auth = 20

[mon.0]
host = ceph_mds
mon addr = 222.31.76.178:6789

; mds
; You need at least one. Define two to get a standby.
[mds]
; where the mds keeps its secret encryption keys
keyring = /data/keyring.$name

; mds logging to debug issues.
;debug ms = 1
;debug mds = 20

[mds.alpha]
host = ceph_mds

; osd
; You need at least one. Two if you want data to be replicated.
; Define as many as you like.
[osd]
sudo = true
; This is where the osd expects its data
osd data = /data/osd$id

; Ideally, make the journal a separate disk or partition.
; 1-10GB should be enough; more if you have fast or many
; disks. You can use a file under the osd data dir if need be
; (e.g. /data/$name/journal), but it will be slower than a
; separate disk or partition.
; This is an example of a file-based journal.
osd journal = /data/$name/journal
osd journal size = 1000 ; journal size, in megabytes

; If you want to run the journal on a tmpfs (don't), disable DirectIO
;journal dio = false

; You can change the number of recovery operations to speed up recovery
; or slow it down if your machines can't handle it
; osd recovery max active = 3

; osd logging to debug osd issues, in order of likelihood of being
; helpful
;debug ms = 1
;debug osd = 20
;debug filestore = 20
;debug journal = 20


; ### The below options only apply if you're using mkcephfs
; ### and the devs options
; The filesystem used on the volumes
osd mkfs type = btrfs
; If you want to specify some other mount options, you can do so.
; for other filesystems use 'osd mount options $fstype'
osd mount options btrfs = rw,noatime
; The options used to format the filesystem via mkfs.$fstype
; for other filesystems use 'osd mkfs options $fstype'
; osd mkfs options btrfs =


[osd.0]
host = ceph_osd0

; if 'devs' is not specified, you're responsible for
; setting up the 'osd data' dir.
btrfs devs = /dev/mapper/VolGroup-lv_ceph

[osd.1]
host = ceph_osd1

btrfs devs = /dev/mapper/VolGroup-lv_ceph

[osd.2]
host = ceph_osd2

btrfs devs = /dev/mapper/VolGroup-lv_ceph



4. Configure the network
① Set each node's hostname, and make sure nodes can reach one another by hostname
Reference: http://soft.chinabyte.com/os/281/11563281.shtml
Edit /etc/sysconfig/network to define the node's own hostname;
edit /etc/hosts to map the other nodes' hostnames to their IPs;
reboot, then verify with the hostname command.
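A small sketch to verify step ① on any node: check that every cluster hostname from the table earlier appears in the hosts file (the `HOSTS_FILE` override and the helper name are mine, for illustration and testing):

```shell
# Verify that /etc/hosts has an entry for every node in the cluster.
HOSTS_FILE=${HOSTS_FILE:-/etc/hosts}
check_hosts() {
    for h in ceph_mds ceph_osd0 ceph_osd1 ceph_osd2; do
        if grep -qw "$h" "$HOSTS_FILE" 2>/dev/null; then
            echo "$h: ok"
        else
            echo "$h: MISSING from $HOSTS_FILE"
        fi
    done
}
check_hosts
```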
② Enable passwordless ssh between all nodes
This works through public/private key pairs: to log in to another node, give it your public key first, and it can then verify your identity against that key.
Example, run on node A:
#ssh-keygen -d
This generates several files under ~/.ssh; the one that matters is id_dsa.pub, node A's public key. Append its contents to ~/.ssh/authorized_keys on node B (create the file if it does not exist). Node A can then ssh to node B without a password.


5. Create the filesystem and start the cluster. Run the following commands on the monitor node!
#mkcephfs -a -c /usr/local/etc/ceph/ceph.conf --mkbtrfs
I ran into the following problems:
(1) scp: /etc/ceph/ceph.conf: No such file or directory
[root@ceph_mds ceph]# mkcephfs -a -c /usr/local/etc/ceph/ceph.conf --mkbtrfs
[/usr/local/etc/ceph/fetch_config /tmp/fetched.ceph.conf.2693]
The authenticity of host 'ceph_mds (127.0.0.1)' can't be established.
RSA key fingerprint is a7:c8:b8:2e:86:ea:89:ff:11:93:e9:29:68:b5:7c:11.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'ceph_mds' (RSA) to the list of known hosts.
ceph.conf 100% 4436 4.3KB/s 00:00
temp dir is /tmp/mkcephfs.tIHQnX8vkw
preparing monmap in /tmp/mkcephfs.tIHQnX8vkw/monmap
/usr/local/bin/monmaptool --create --clobber --add 0 222.31.76.178:6789 --print /tmp/mkcephfs.tIHQnX8vkw/monmap
/usr/local/bin/monmaptool: monmap file /tmp/mkcephfs.tIHQnX8vkw/monmap
/usr/local/bin/monmaptool: generated fsid f998ee83-9eba-4de2-94e3-14f235ef840c
epoch 0
fsid f998ee83-9eba-4de2-94e3-14f235ef840c
last_changed 2013-05-31 08:22:52.972189
created 2013-05-31 08:22:52.972189
0: 222.31.76.178:6789/0 mon.0
/usr/local/bin/monmaptool: writing epoch 0 to /tmp/mkcephfs.tIHQnX8vkw/monmap (1 monitors)
=== osd.0 ===
pushing conf and monmap to ceph_osd0:/tmp/mkfs.ceph.0b3c65941572123eb704d9d614411fc1
scp: /etc/ceph/ceph.conf: No such file or directory
Fix: write a script that syncs the configuration file into both /etc/ceph and /usr/local/etc/ceph on every node (create the /etc/ceph directory by hand first):
[root@ceph_mds ceph]# cat cp_ceph_conf.sh
cp /usr/local/etc/ceph/ceph.conf /etc/ceph/ceph.conf
scp /usr/local/etc/ceph/ceph.conf root@ceph_osd0:/usr/local/etc/ceph/ceph.conf
scp /usr/local/etc/ceph/ceph.conf root@ceph_osd0:/etc/ceph/ceph.conf
scp /usr/local/etc/ceph/ceph.conf root@ceph_osd1:/usr/local/etc/ceph/ceph.conf
scp /usr/local/etc/ceph/ceph.conf root@ceph_osd1:/etc/ceph/ceph.conf
scp /usr/local/etc/ceph/ceph.conf root@ceph_osd2:/usr/local/etc/ceph/ceph.conf
scp /usr/local/etc/ceph/ceph.conf root@ceph_osd2:/etc/ceph/ceph.conf
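The same script can be written as a loop, so adding a node only means extending one list (hostnames and paths as above). This version prints the commands first so they can be reviewed before running:

```shell
# Loop form of cp_ceph_conf.sh: print the sync commands for review.
CONF=/usr/local/etc/ceph/ceph.conf
gen_sync_cmds() {
    echo "cp $CONF /etc/ceph/ceph.conf"
    for h in ceph_osd0 ceph_osd1 ceph_osd2; do
        echo "scp $CONF root@$h:/usr/local/etc/ceph/ceph.conf"
        echo "scp $CONF root@$h:/etc/ceph/ceph.conf"
    done
}
gen_sync_cmds
```

Pipe the output to sh (`gen_sync_cmds | sh`) to actually sync.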
(2)
[root@ceph_mds ceph]# mkcephfs -a -c /usr/local/etc/ceph/ceph.conf --mkbtrfs
temp dir is /tmp/mkcephfs.hz1EcPJjtu
preparing monmap in /tmp/mkcephfs.hz1EcPJjtu/monmap
/usr/local/bin/monmaptool --create --clobber --add 0 222.31.76.178:6789 --print /tmp/mkcephfs.hz1EcPJjtu/monmap
/usr/local/bin/monmaptool: monmap file /tmp/mkcephfs.hz1EcPJjtu/monmap
/usr/local/bin/monmaptool: generated fsid 62fdb8b1-8d98-42f2-9cef-b95e2ad7bd43
epoch 0
fsid 62fdb8b1-8d98-42f2-9cef-b95e2ad7bd43
last_changed 2013-05-31 08:39:48.198656
created 2013-05-31 08:39:48.198656
0: 222.31.76.178:6789/0 mon.0
/usr/local/bin/monmaptool: writing epoch 0 to /tmp/mkcephfs.hz1EcPJjtu/monmap (1 monitors)
=== osd.0 ===
pushing conf and monmap to ceph_osd0:/tmp/mkfs.ceph.2e991ed41f1cdca1149725615a96d0be
umount: /data/osd0: not mounted
umount: /dev/mapper/VolGroup-lv_ceph: not mounted

WARNING! - Btrfs Btrfs v0.20-rc1 IS EXPERIMENTAL
WARNING! - see http://btrfs.wiki.kernel.org before using

fs created label (null) on /dev/mapper/VolGroup-lv_ceph
nodesize 4096 leafsize 4096 sectorsize 4096 size 100.00GB
Btrfs Btrfs v0.20-rc1
2013-05-31 12:39:04.073438 7f02cd9ac760 -1 filestore(/data/osd0) could not find 23c2fcde/osd_superblock/0//-1 in index: (2) No such file or directory
2013-05-31 12:39:04.362010 7f02cd9ac760 -1 created object store /data/osd0 journal /data/osd.0/journal for osd.0 fsid 62fdb8b1-8d98-42f2-9cef-b95e2ad7bd43
2013-05-31 12:39:04.362074 7f02cd9ac760 -1 auth: error reading file: /data/osd0/keyring: can't open /data/osd0/keyring: (2) No such file or directory
2013-05-31 12:39:04.362280 7f02cd9ac760 -1 created new key in keyring /data/osd0/keyring
collecting osd.0 key

=== osd.1 ===
pushing conf and monmap to ceph_osd1:/tmp/mkfs.ceph.9a9f67ff6e7516b415d30f0a89bfe0dd
umount: /data/osd1: not mounted
umount: /dev/mapper/VolGroup-lv_ceph: not mounted

WARNING! - Btrfs Btrfs v0.20-rc1 IS EXPERIMENTAL
WARNING! - see http://btrfs.wiki.kernel.org before using

fs created label (null) on /dev/mapper/VolGroup-lv_ceph
nodesize 4096 leafsize 4096 sectorsize 4096 size 100.00GB
Btrfs Btrfs v0.20-rc1
2013-05-31 08:39:13.237718 7ff0a2fe4760 -1 filestore(/data/osd1) could not find 23c2fcde/osd_superblock/0//-1 in index: (2) No such file or directory
2013-05-31 08:39:13.524175 7ff0a2fe4760 -1 created object store /data/osd1 journal /data/osd.1/journal for osd.1 fsid 62fdb8b1-8d98-42f2-9cef-b95e2ad7bd43
2013-05-31 08:39:13.524241 7ff0a2fe4760 -1 auth: error reading file: /data/osd1/keyring: can't open /data/osd1/keyring: (2) No such file or directory
2013-05-31 08:39:13.524430 7ff0a2fe4760 -1 created new key in keyring /data/osd1/keyring
collecting osd.1 key
=== osd.2 ===
pushing conf and monmap to ceph_osd2:/tmp/mkfs.ceph.51a8af4b24b311fcc2d47eed2cd714ca
umount: /data/osd2: not mounted
umount: /dev/mapper/VolGroup-lv_ceph: not mounted

WARNING! - Btrfs Btrfs v0.20-rc1 IS EXPERIMENTAL
WARNING! - see http://btrfs.wiki.kernel.org before using

fs created label (null) on /dev/mapper/VolGroup-lv_ceph
nodesize 4096 leafsize 4096 sectorsize 4096 size 100.00GB
Btrfs Btrfs v0.20-rc1
2013-05-31 09:01:49.371853 7ff422eb1760 -1 filestore(/data/osd2) could not find 23c2fcde/osd_superblock/0//-1 in index: (2) No such file or directory
2013-05-31 09:01:49.583061 7ff422eb1760 -1 created object store /data/osd2 journal /data/osd.2/journal for osd.2 fsid 62fdb8b1-8d98-42f2-9cef-b95e2ad7bd43
2013-05-31 09:01:49.583123 7ff422eb1760 -1 auth: error reading file: /data/osd2/keyring: can't open /data/osd2/keyring: (2) No such file or directory
2013-05-31 09:01:49.583312 7ff422eb1760 -1 created new key in keyring /data/osd2/keyring
collecting osd.2 key
=== mds.alpha ===
creating private key for mds.alpha keyring /data/keyring.mds.alpha
creating /data/keyring.mds.alpha
bufferlist::write_file(/data/keyring.mds.alpha): failed to open file: (2) No such file or directory
could not write /data/keyring.mds.alpha
can't open /data/keyring.mds.alpha: can't open /data/keyring.mds.alpha: (2) No such file or directory
failed: '/usr/local/sbin/mkcephfs -d /tmp/mkcephfs.hz1EcPJjtu --init-daemon mds.alpha'
Fix: create the file by hand:
#mkdir /data
#touch /data/keyring.mds.alpha
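More generally, none of the data paths named in ceph.conf are created automatically, so the same class of error can come back for the mon and osd directories. A sketch that pre-creates everything this article's ceph.conf points at (`DATA` is parameterized only so the sketch can be tried outside /data; each node really only needs its own entries):

```shell
# Pre-create the mon/osd data directories and the mds keyring file
# referenced by this article's ceph.conf.
DATA=${DATA:-/data}
prepare_dirs() {
    mkdir -p "$DATA"/mon0 "$DATA"/osd0 "$DATA"/osd1 "$DATA"/osd2
    touch "$DATA"/keyring.mds.alpha
}
# Run prepare_dirs as root on the relevant node.
```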
[Creation succeeded]
[root@ceph_mds ceph]# mkcephfs -a -c /usr/local/etc/ceph/ceph.conf --mkbtrfs
temp dir is /tmp/mkcephfs.v9vb0zOmJ5
preparing monmap in /tmp/mkcephfs.v9vb0zOmJ5/monmap
/usr/local/bin/monmaptool --create --clobber --add 0 222.31.76.178:6789 --print /tmp/mkcephfs.v9vb0zOmJ5/monmap
/usr/local/bin/monmaptool: monmap file /tmp/mkcephfs.v9vb0zOmJ5/monmap
/usr/local/bin/monmaptool: generated fsid 652b09fb-bbbf-424c-bd49-8218d75465ba
epoch 0
fsid 652b09fb-bbbf-424c-bd49-8218d75465ba
last_changed 2013-05-31 08:50:21.797571
created 2013-05-31 08:50:21.797571
0: 222.31.76.178:6789/0 mon.0
/usr/local/bin/monmaptool: writing epoch 0 to /tmp/mkcephfs.v9vb0zOmJ5/monmap (1 monitors)
=== osd.0 ===
pushing conf and monmap to ceph_osd0:/tmp/mkfs.ceph.8912ed2e34cfd2477c2549354c03faa3
umount: /dev/mapper/VolGroup-lv_ceph: not mounted

WARNING! - Btrfs Btrfs v0.20-rc1 IS EXPERIMENTAL
WARNING! - see http://btrfs.wiki.kernel.org before using

fs created label (null) on /dev/mapper/VolGroup-lv_ceph
nodesize 4096 leafsize 4096 sectorsize 4096 size 100.00GB
Btrfs Btrfs v0.20-rc1
2013-05-31 12:49:36.548329 7f67d293e760 -1 journal check: ondisk fsid 919417f1-0a79-4463-903c-3fc9df8ca0f8 doesn't match expected 3b3d2772-4981-46fd-bbcd-b11957c77d47, invalid (someone else's?) journal
2013-05-31 12:49:36.953666 7f67d293e760 -1 filestore(/data/osd0) could not find 23c2fcde/osd_superblock/0//-1 in index: (2) No such file or directory
2013-05-31 12:49:37.244334 7f67d293e760 -1 created object store /data/osd0 journal /data/osd.0/journal for osd.0 fsid 652b09fb-bbbf-424c-bd49-8218d75465ba
2013-05-31 12:49:37.244397 7f67d293e760 -1 auth: error reading file: /data/osd0/keyring: can't open /data/osd0/keyring: (2) No such file or directory
2013-05-31 12:49:37.244580 7f67d293e760 -1 created new key in keyring /data/osd0/keyring
collecting osd.0 key
=== osd.1 ===
pushing conf and monmap to ceph_osd1:/tmp/mkfs.ceph.69d388555243635efea3c5976d001b64
umount: /dev/mapper/VolGroup-lv_ceph: not mounted

WARNING! - Btrfs Btrfs v0.20-rc1 IS EXPERIMENTAL
WARNING! - see http://btrfs.wiki.kernel.org before using

fs created label (null) on /dev/mapper/VolGroup-lv_ceph
nodesize 4096 leafsize 4096 sectorsize 4096 size 100.00GB
Btrfs Btrfs v0.20-rc1
2013-05-31 08:49:45.012858 7f82a3d52760 -1 journal check: ondisk fsid 28f23b77-6f77-47b3-b946-7eda652d4488 doesn't match expected 65a75a4f-b639-4eab-91d6-00c985118862, invalid (someone else's?) journal
2013-05-31 08:49:45.407962 7f82a3d52760 -1 filestore(/data/osd1) could not find 23c2fcde/osd_superblock/0//-1 in index: (2) No such file or directory
2013-05-31 08:49:45.696990 7f82a3d52760 -1 created object store /data/osd1 journal /data/osd.1/journal for osd.1 fsid 652b09fb-bbbf-424c-bd49-8218d75465ba
2013-05-31 08:49:45.697052 7f82a3d52760 -1 auth: error reading file: /data/osd1/keyring: can't open /data/osd1/keyring: (2) No such file or directory
2013-05-31 08:49:45.697238 7f82a3d52760 -1 created new key in keyring /data/osd1/keyring
collecting osd.1 key
=== osd.2 ===
pushing conf and monmap to ceph_osd2:/tmp/mkfs.ceph.686b9d63c840a05a6eed5b5781f10b27
umount: /dev/mapper/VolGroup-lv_ceph: not mounted

WARNING! - Btrfs Btrfs v0.20-rc1 IS EXPERIMENTAL
WARNING! - see http://btrfs.wiki.kernel.org before using

fs created label (null) on /dev/mapper/VolGroup-lv_ceph
nodesize 4096 leafsize 4096 sectorsize 4096 size 100.00GB
Btrfs Btrfs v0.20-rc1
2013-05-31 09:12:20.708733 7fa54ae8f760 -1 journal check: ondisk fsid dc21285e-3bde-4f53-9424-d059540ab920 doesn't match expected cae83f10-d633-48d1-b324-a64849eca974, invalid (someone else's?) journal
2013-05-31 09:12:21.057154 7fa54ae8f760 -1 filestore(/data/osd2) could not find 23c2fcde/osd_superblock/0//-1 in index: (2) No such file or directory
2013-05-31 09:12:21.253689 7fa54ae8f760 -1 created object store /data/osd2 journal /data/osd.2/journal for osd.2 fsid 652b09fb-bbbf-424c-bd49-8218d75465ba
2013-05-31 09:12:21.253749 7fa54ae8f760 -1 auth: error reading file: /data/osd2/keyring: can't open /data/osd2/keyring: (2) No such file or directory
2013-05-31 09:12:21.253931 7fa54ae8f760 -1 created new key in keyring /data/osd2/keyring
collecting osd.2 key
=== mds.alpha ===
creating private key for mds.alpha keyring /data/keyring.mds.alpha
creating /data/keyring.mds.alpha
Building generic osdmap from /tmp/mkcephfs.v9vb0zOmJ5/conf
/usr/local/bin/osdmaptool: osdmap file '/tmp/mkcephfs.v9vb0zOmJ5/osdmap'
/usr/local/bin/osdmaptool: writing epoch 1 to /tmp/mkcephfs.v9vb0zOmJ5/osdmap
Generating admin key at /tmp/mkcephfs.v9vb0zOmJ5/keyring.admin
creating /tmp/mkcephfs.v9vb0zOmJ5/keyring.admin
Building initial monitor keyring
added entity mds.alpha auth auth(auid = 18446744073709551615 key=AQCXnKhRiL/QHhAA091/MQGD25V54smKBz959w== with 0 caps)
added entity osd.0 auth auth(auid = 18446744073709551615 key=AQDhK6hROEuRDhAA9uCsjB++Szh8sJy3CUgeoA== with 0 caps)
added entity osd.1 auth auth(auid = 18446744073709551615 key=AQBpnKhR0EKMKRAAzNWvZgkDWrPSuZaSttBdsw== with 0 caps)
added entity osd.2 auth auth(auid = 18446744073709551615 key=AQC1oahReP4fDxAAR0R0HTNfVbs6VMybLIU9qg== with 0 caps)
=== mon.0 ===
/usr/local/bin/ceph-mon: created monfs at /data/mon0 for mon.0
placing client.admin keyring in /etc/ceph/keyring
[Start]
#/etc/init.d/ceph -a start    // stop the firewall first if necessary (#service iptables stop)
[root@ceph_mds ceph]# /etc/init.d/ceph -a start
=== mon.0 ===
Starting Ceph mon.0 on ceph_mds...
starting mon.0 rank 0 at 222.31.76.178:6789/0 mon_data /data/mon0 fsid 652b09fb-bbbf-424c-bd49-8218d75465ba
=== mds.alpha ===
Starting Ceph mds.alpha on ceph_mds...
starting mds.alpha at :/0
=== osd.0 ===
Mounting Btrfs on ceph_osd0:/data/osd0
Scanning for Btrfs filesystems
Starting Ceph osd.0 on ceph_osd0...
starting osd.0 at :/0 osd_data /data/osd0 /data/osd.0/journal
=== osd.1 ===
Mounting Btrfs on ceph_osd1:/data/osd1
Scanning for Btrfs filesystems
Starting Ceph osd.1 on ceph_osd1...
starting osd.1 at :/0 osd_data /data/osd1 /data/osd.1/journal
=== osd.2 ===
Mounting Btrfs on ceph_osd2:/data/osd2
Scanning for Btrfs filesystems
Starting Ceph osd.2 on ceph_osd2...
starting osd.2 at :/0 osd_data /data/osd2 /data/osd.2/journal
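After starting the daemons, the cluster can take a moment to settle; a small polling helper can wait for HEALTH_OK before moving on (a sketch; the retry count and sleep interval are arbitrary choices of mine):

```shell
# Poll `ceph -s` until it reports HEALTH_OK, up to 10 tries, 3s apart.
wait_healthy() {
    for i in 1 2 3 4 5 6 7 8 9 10; do
        status=$(ceph -s 2>/dev/null | grep -o 'HEALTH_[A-Z]*' | head -1)
        if [ "$status" = "HEALTH_OK" ]; then
            echo "cluster healthy"
            return 0
        fi
        sleep 3
    done
    echo "cluster not healthy yet: $status"
    return 1
}
```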

[Check the Ceph cluster status]
[root@ceph_mds ceph]# ceph -s
health HEALTH_OK
monmap e1: 1 mons at {0=222.31.76.178:6789/0}, election epoch 2, quorum 0 0
osdmap e7: 3 osds: 3 up, 3 in
pgmap v432: 768 pgs: 768 active+clean; 9518 bytes data, 16876 KB used, 293 GB / 300 GB avail
mdsmap e4: 1/1/1 up {0=alpha=up:active}
[root@ceph_mds ceph]# ceph df
GLOBAL:
SIZE      AVAIL     RAW USED     %RAW USED
300M      293M      16876        0

POOLS:
NAME         ID     USED     %USED     OBJECTS
data         0      0        0         0
metadata     1      9518     0         21
rbd          2      0        0         0
Question: the space accounting looks wrong. "ceph -s" reports 300 GB, but "ceph df" reports 300M.


6. Mount on the client
#mkdir /mnt/ceph
#mount -t ceph ceph_mds:/ /mnt/ceph
I hit the following errors:
(1)
[root@localhost ~]# mount -t ceph ceph_mds:/ /mnt/ceph/
mount: wrong fs type, bad option, bad superblock on ceph_mds:/,
missing codepage or helper program, or other error
(for several filesystems (e.g. nfs, cifs) you might
need a /sbin/mount.<type> helper program)
In some cases useful info is found in syslog - try
dmesg | tail or so
Check with #dmesg:
ceph: Unknown symbol ceph_con_keepalive (err 0)
ceph: Unknown symbol ceph_create_client (err 0)
ceph: Unknown symbol ceph_calc_pg_primary (err 0)
ceph: Unknown symbol ceph_osdc_release_request (err 0)
ceph: Unknown symbol ceph_con_open (err 0)
ceph: Unknown symbol ceph_flags_to_mode (err 0)
ceph: Unknown symbol ceph_msg_last_put (err 0)
ceph: Unknown symbol ceph_caps_for_mode (err 0)
ceph: Unknown symbol ceph_copy_page_vector_to_user (err 0)
ceph: Unknown symbol ceph_msg_new (err 0)
ceph: Unknown symbol ceph_msg_type_name (err 0)
ceph: Unknown symbol ceph_pagelist_truncate (err 0)
ceph: Unknown symbol ceph_release_page_vector (err 0)
ceph: Unknown symbol ceph_check_fsid (err 0)
ceph: Unknown symbol ceph_pagelist_reserve (err 0)
ceph: Unknown symbol ceph_pagelist_append (err 0)
ceph: Unknown symbol ceph_calc_object_layout (err 0)
ceph: Unknown symbol ceph_get_direct_page_vector (err 0)
ceph: Unknown symbol ceph_osdc_wait_request (err 0)
ceph: Unknown symbol ceph_osdc_new_request (err 0)
ceph: Unknown symbol ceph_pagelist_set_cursor (err 0)
ceph: Unknown symbol ceph_calc_file_object_mapping (err 0)
ceph: Unknown symbol ceph_monc_got_mdsmap (err 0)
ceph: Unknown symbol ceph_osdc_readpages (err 0)
ceph: Unknown symbol ceph_con_send (err 0)
ceph: Unknown symbol ceph_zero_page_vector_range (err 0)
ceph: Unknown symbol ceph_osdc_start_request (err 0)
ceph: Unknown symbol ceph_compare_options (err 0)
ceph: Unknown symbol ceph_msg_dump (err 0)
ceph: Unknown symbol ceph_buffer_new (err 0)
ceph: Unknown symbol ceph_put_page_vector (err 0)
ceph: Unknown symbol ceph_pagelist_release (err 0)
ceph: Unknown symbol ceph_osdc_sync (err 0)
ceph: Unknown symbol ceph_destroy_client (err 0)
ceph: Unknown symbol ceph_copy_user_to_page_vector (err 0)
ceph: Unknown symbol __ceph_open_session (err 0)
ceph: Unknown symbol ceph_alloc_page_vector (err 0)
ceph: Unknown symbol ceph_monc_do_statfs (err 0)
ceph: Unknown symbol ceph_monc_validate_auth (err 0)
ceph: Unknown symbol ceph_osdc_writepages (err 0)
ceph: Unknown symbol ceph_parse_options (err 0)
ceph: Unknown symbol ceph_str_hash (err 0)
ceph: Unknown symbol ceph_pr_addr (err 0)
ceph: Unknown symbol ceph_buffer_release (err 0)
ceph: Unknown symbol ceph_con_init (err 0)
ceph: Unknown symbol ceph_destroy_options (err 0)
ceph: Unknown symbol ceph_con_close (err 0)
ceph: Unknown symbol ceph_msgr_flush (err 0)
Key type ceph registered
libceph: loaded (mon/osd proto 15/24, osdmap 5/6 5/6)
ceph: loaded (mds proto 32)
libceph: parse_ips bad ip 'ceph_mds'
ceph: loaded (mds proto 32)
libceph: parse_ips bad ip 'ceph_mds'
I noticed that the client's mount had no ceph filesystem type at all (no "mount.ceph" helper), while all the other nodes we configured did have mount.ceph, so I built and installed the latest ceph-0.60 on the client as well.
(2) After building and installing ceph-0.60, mount still failed with the same error; dmesg showed:
#dmesg | tail
Key type ceph unregistered
Key type ceph registered
libceph: loaded (mon/osd proto 15/24, osdmap 5/6 5/6)
ceph: loaded (mds proto 32)
libceph: parse_ips bad ip 'ceph_mds'
libceph: no secret set (for auth_x protocol)
libceph: error -22 on auth protocol 2 init

libceph: client4102 fsid 652b09fb-bbbf-424c-bd49-8218d75465ba

The root cause turned out to be that mount also needs a user name and secret key; the full command is:
#mount.ceph ceph_mds:/ /mnt/ceph -v -o name=admin,secret=AQCXnKhRgMltJRAAi0WMqr+atKFPaIV4Aja4hQ==
[root@localhost ~]# mount.ceph ceph_mds:/ /mnt/ceph -v -o name=admin,secret=AQCXnKhRgMltJRAAi0WMqr+atKFPaIV4Aja4hQ==
parsing options: name=admin,secret=AQCXnKhRgMltJRAAi0WMqr+atKFPaIV4Aja4hQ==
The name and secret values above come from the monitor's /etc/ceph/keyring file:
[root@ceph_mds ceph]# cat /etc/ceph/keyring
[client.admin]
key = AQCXnKhRgMltJRAAi0WMqr+atKFPaIV4Aja4hQ==
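To avoid pasting the secret by hand, it can be extracted from that keyring file at mount time. A sketch (the `KEYRING` variable and the helper name are mine; the awk assumes the `key = ...` line format shown above):

```shell
# Extract the client.admin secret from a Ceph keyring file.
KEYRING=${KEYRING:-/etc/ceph/keyring}
get_secret() {
    awk -F' = ' '/^[[:space:]]*key[[:space:]]*=/ {print $2; exit}' "$KEYRING"
}
# Then mount with:
#   mount.ceph ceph_mds:/ /mnt/ceph -v -o name=admin,secret="$(get_secret)"
```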
Check the mount on the client:
[root@localhost ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/VolGroup-lv_root
50G 13G 35G 27% /
tmpfs 2.0G 0 2.0G 0% /dev/shm
/dev/sda1 477M 48M 405M 11% /boot
/dev/mapper/VolGroup-lv_home
405G 71M 385G 1% /home
222.31.76.178:/ 300G 6.1G 294G 3% /mnt/ceph

P.S. Posts online claim that, to avoid typing the secret every time, you can add the following section to ceph.conf (and remember to sync it to the other nodes). In my tests it had no effect, so for now I mount with the command above; if anyone knows what I got wrong, please let me know.
[mount /]
     allow = %everyone


This completes the Ceph installation and configuration; the Ceph distributed filesystem can now be used under /mnt/ceph on the client.
Next up is functional verification testing of Ceph; test report to follow!









Please credit the source when reprinting: http://www.chengxuyuans.com/Linux/63563.html