
Installing Ceph from RPM packages

Environment preparation

1. On each node that will run Ceph daemons, create a regular user. ceph-deploy installs packages on the nodes, so this user needs passwordless sudo. If you deploy as root, you can skip this step.
To grant the user full privileges, add the following to /etc/sudoers.d/ceph:

echo "ceph ALL = (root) nopASSWD:ALL" | tee /etc/sudoers.d/ceph
sudo chmod 0440 /etc/sudoers.d/ceph
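
To verify that the entry works, a minimal check (run as root; "ceph" is the user created above):

su - ceph -c 'sudo -n true' && echo "passwordless sudo OK"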

2. Configure your admin host so it can reach every node over SSH without a password, for example as sketched below.
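
A minimal sketch, run as the deploy user on the admin host (the hostnames are the cluster nodes used later in this guide):

ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa
for host in qd01-stop-k8s-node001 qd01-stop-k8s-node002 qd01-stop-k8s-node003; do
    ssh-copy-id ceph@$host
done
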
3. Configure the Ceph yum repository (ceph.repo). Here we point straight at the 163.com mirror to speed up installation:

[Ceph]
name=Ceph packages for $basearch
baseurl=http://mirrors.163.com/ceph/rpm-mimic/el7/$basearch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=http://mirrors.163.com/ceph/keys/release.asc
priority=1


[Ceph-noarch]
name=Ceph noarch packages
baseurl=http://mirrors.163.com/ceph/rpm-mimic/el7/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=http://mirrors.163.com/ceph/keys/release.asc
priority=1


[ceph-source]
name=Ceph source packages
baseurl=http://mirrors.163.com/ceph/rpm-mimic/el7/SRPMS
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=http://mirrors.163.com/ceph/keys/release.asc
priority=1
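
Save this as /etc/yum.repos.d/ceph.repo on every node. One way to push it out (a sketch, assuming the file is already in place on the admin host and passwordless SSH from step 2):

for host in qd01-stop-k8s-node001 qd01-stop-k8s-node002 qd01-stop-k8s-node003; do
    scp /etc/yum.repos.d/ceph.repo $host:/tmp/ceph.repo
    ssh $host 'sudo mv /tmp/ceph.repo /etc/yum.repos.d/ceph.repo'
done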

Deploying the Ceph packages and initializing the cluster

4. Install ceph-deploy (on the admin node only):

sudo yum update && sudo yum install ceph-deploy
Install Ceph and its dependencies on the target nodes:
sudo ceph-deploy install qd01-stop-k8s-node001 qd01-stop-k8s-node002 qd01-stop-k8s-node003

Note: the Ceph packages can also be installed directly with yum.
Run the following on every node; if you do, the ceph-deploy install step above can be skipped:

sudo yum -y install ceph ceph-common rbd-fuse ceph-release python-ceph-compat  python-rbd librbd1-devel ceph-radosgw

5. Create the cluster:

sudo ceph-deploy new qd01-stop-k8s-node001 qd01-stop-k8s-node002 qd01-stop-k8s-node003

Then edit the generated configuration file, ceph.conf:

[global]
fsid = ec7ee19a-f7c6-4ed0-9307-f48af473352c
mon_initial_members = qd01-stop-k8s-node001, qd01-stop-k8s-node002, qd01-stop-k8s-node003
mon_host = 10.26.22.105,10.26.22.80,10.26.22.85
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
cluster_network = 10.0.0.0/8
public_network = 10.0.0.0/8

filestore_xattr_use_omap = true
osd_pool_default_size = 3
osd_pool_default_min_size = 1
osd_pool_default_pg_num = 520
osd_pool_default_pgp_num = 520
osd_recovery_op_priority= 10
osd_client_op_priority = 100
osd_op_threads = 20
osd_recovery_max_active = 2
osd_max_backfills = 2
osd_scrub_load_threshold = 1

osd_deep_scrub_interval = 604800000
osd_deep_scrub_stride = 4096

[client]
rbd_cache = true
rbd_cache_size = 134217728
rbd_cache_max_dirty = 125829120

[mon]
mon_allow_pool_delete = true

Note: when adding a Mon on a host that was not defined by the ceph-deploy new command, public_network must already be present in the ceph.conf configuration file.
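
With public_network set, a monitor can later be added to the running cluster like so (a sketch; qd01-stop-k8s-node004 is a hypothetical extra host):

ceph-deploy mon add qd01-stop-k8s-node004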

6. Create and initialize the monitors:

ceph-deploy mon create-initial

7. Create the OSDs. Only one host's commands are shown here; for additional OSD hosts, repeat them with the hostname replaced (or use the loop sketched after this step).
Zap the disks:

ceph-deploy  disk zap qd01-stop-k8s-node001 /dev/sdb
ceph-deploy  disk zap qd01-stop-k8s-node001 /dev/sdc
ceph-deploy  disk zap qd01-stop-k8s-node001 /dev/sdd

Create and activate the OSDs:

ceph-deploy osd create   --data  /dev/sdb  qd01-stop-k8s-node001
ceph-deploy osd create   --data  /dev/sdc  qd01-stop-k8s-node001
ceph-deploy osd create   --data  /dev/sdd  qd01-stop-k8s-node001
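
To cover all OSD hosts in one pass, the same pair of commands can be wrapped in a loop (a sketch, assuming every host carries the same three data disks):

for host in qd01-stop-k8s-node001 qd01-stop-k8s-node002 qd01-stop-k8s-node003; do
    for disk in /dev/sdb /dev/sdc /dev/sdd; do
        ceph-deploy disk zap $host $disk
        ceph-deploy osd create --data $disk $host
    done
done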

8. Create the manager (mgr) daemons:

ceph-deploy mgr create  qd01-stop-k8s-node001 qd01-stop-k8s-node002 qd01-stop-k8s-node003

Verifying the Ceph cluster status

9. Check the cluster status:

[root@qd01-stop-k8s-node001 ~]# ceph -s
  cluster:
    id:     ec7ee19a-f7c6-4ed0-9307-f48af473352c
    health: HEALTH_OK
 
  services:
    mon: 3 daemons, quorum qd01-stop-k8s-node002,qd01-stop-k8s-node003,qd01-stop-k8s-node001
    mgr: qd01-stop-k8s-node001(active), standbys: qd01-stop-k8s-node002, qd01-stop-k8s-node003
    osd: 24 osds: 24 up, 24 in
 
  data:
    pools:   1 pools, 256 pgs
    objects: 5  objects, 325 B
    usage:   24 GiB used, 44 TiB / 44 TiB avail
    pgs:     256 active+clean

10. Enable the mgr dashboard:

ceph mgr module enable dashboard
ceph dashboard create-self-signed-cert
ceph dashboard set-login-credentials admin admin
ceph mgr services

ceph config-key put mgr/dashboard/server_addr 0.0.0.0   # bind address
ceph config-key put mgr/dashboard/server_port 7000      # listening port
systemctl  restart ceph-mgr@qd01-stop-k8s-node001.service
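
After the restart, ceph mgr services should list the dashboard URL. A quick reachability check from any node (a sketch; -k skips validation of the self-signed certificate):

curl -k https://qd01-stop-k8s-node001:7000/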

11. Create a pool:

Create it:
ceph osd pool create  k8s 256 256
Allow rbd to use the pool:
ceph osd pool application enable k8s rbd --yes-i-really-mean-it
List pools:
ceph osd pool ls
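
The pg_num of 256 used here is a judgment call. A common rule of thumb from the Ceph documentation is roughly 100 PGs per OSD divided by the replica count, rounded to a power of two: with the 24 OSDs and osd_pool_default_size = 3 shown above, 24 * 100 / 3 = 800, which suggests 512 or 1024. Erring low is safer, since pg_num can be raised later but (before Nautilus) not lowered.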

12. Test:

[root@qd01-stop-k8s-node001 ~]# rbd   create docker_test --size 4096 -p k8s
[root@qd01-stop-k8s-node001 ~]# rbd info docker_test -p k8s
rbd image 'docker_test':
        size 4 GiB in 1024 objects
        order 22 (4 MiB objects)
        id: 11ed6b8b4567
        block_name_prefix: rbd_data.11ed6b8b4567
        format: 2
        features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
        op_features:
        flags:
        create_timestamp: Wed Nov 11 17:19:38 2020
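
To use the image from a client, map and mount it. A sketch: the kernel rbd client shipped with stock CentOS 7 does not support the object-map, fast-diff, and deep-flatten features listed above, so they usually need to be disabled before mapping.

rbd feature disable k8s/docker_test object-map fast-diff deep-flatten
rbd map k8s/docker_test        # prints the device name, e.g. /dev/rbd0
mkfs.xfs /dev/rbd0
mount /dev/rbd0 /mnt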

Teardown

13. Remove a node and purge the Ceph packages from it:

ceph-deploy purgedata  qd01-stop-k8s-node008
ceph-deploy purge qd01-stop-k8s-node008

Common Ceph cluster operations

**Health checks**
ceph -s --conf /etc/ceph/ceph.conf --name client.admin --keyring /etc/ceph/ceph.client.admin.keyring
ceph health
ceph quorum_status --format json-pretty
ceph osd dump
ceph osd stat
ceph mon dump
ceph mon stat
ceph mds dump
ceph mds stat
ceph pg dump
ceph pg stat

**osd/pool**
ceph osd tree
ceph osd pool ls detail
ceph osd pool set rbd crush_rule rule-sata
ceph osd pool create sata-pool 256 256 replicated rule-sata
ceph osd pool create ssd-pool 256 256 replicated rule-ssd
ceph osd pool set data min_size 2

**Configuration**
ceph daemon osd.0 config show (run on the node hosting osd.0)
ceph daemon osd.0 config set mon_allow_pool_delete true (run on the OSD's node; lost on restart)
ceph tell osd.0 config set mon_allow_pool_delete false (run from any node; lost on restart)
ceph config set osd.0 mon_allow_pool_delete true (13.x/Mimic and later only; run from any node; survives restarts, but the option must not also appear in the config file, otherwise the mon ignores this setting)
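
To read a value back and see what a daemon is actually running with, the admin-socket interface works on the daemon's own node (a minimal example):

ceph daemon osd.0 config get mon_allow_pool_delete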

**Logs**
ceph log last 100
**map**
ceph osd map <pool> <object>
ceph pg dump
ceph pg map x.yz
ceph pg x.yz query
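
ceph osd map computes CRUSH placement for any object name, whether or not the object exists; for example, against the pool created earlier:

ceph osd map k8s docker_test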

**Auth**
ceph auth get client.admin --name mon. --keyring /var/lib/ceph/mon/ceph-$hostname/keyring
ceph auth get osd.0
ceph auth get mon.
ceph auth ls

**CRUSH**
ceph osd crush add-bucket root-sata root
ceph osd crush add-bucket ceph-1-sata host
ceph osd crush add-bucket ceph-2-sata host
ceph osd crush move ceph-1-sata root=root-sata
ceph osd crush move ceph-2-sata root=root-sata
ceph osd crush add osd.0 2 host=ceph-1-sata
ceph osd crush add osd.1 2 host=ceph-1-sata
ceph osd crush add osd.2 2 host=ceph-2-sata
ceph osd crush add osd.3 2 host=ceph-2-sata

ceph osd crush add-bucket root-ssd root
ceph osd crush add-bucket ceph-1-ssd host
ceph osd crush add-bucket ceph-2-ssd host

ceph osd getcrushmap -o /tmp/crush
crushtool -d /tmp/crush -o /tmp/crush.txt
edit /tmp/crush.txt by hand (see the rule sketch below)
crushtool -c /tmp/crush.txt -o /tmp/crush.bin
ceph osd setcrushmap -i /tmp/crush.bin
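
The edit step typically adds a rule such as the following to /tmp/crush.txt, pinning replicated placement to the root-sata tree built above (a sketch in decompiled-crushmap syntax; the id must not collide with an existing rule):

rule rule-sata {
        id 1
        type replicated
        min_size 1
        max_size 10
        step take root-sata
        step chooseleaf firstn 0 type host
        step emit
}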

