This is a bare-bones guide on how to set up a Ceph cluster using ceph-ansible.
This guide assumes:
- Ansible is installed on your local machine
- Eight CentOS 7.2 nodes are provisioned and formatted with XFS
- You have SSH and sudo access to your nodes
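A quick way to spot-check a node against these assumptions (the hostname is a placeholder for one of your own machines, and the sudo check assumes passwordless sudo):
# replace <your-node> with one of your hosts
ssh <your-node> 'cat /etc/redhat-release; sudo -n true && echo sudo-ok; lsblk -f'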
First, clone the ceph-ansible repo, check out the stable branch, and install a compatible Ansible version:
git clone https://github.com/ceph/ceph-ansible.git
cd ceph-ansible
git checkout stable-3.1
pip install ansible==2.5.0.0
Note: the master branch of ceph-ansible requires Ansible 2.6.0.0; stable-3.1 is what we use here.
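You can confirm that the version you just installed is the one on your PATH:
ansible --version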
Next, add an inventory file to the base of the repo. Using the following contents as a guide, replace the example IPs with your own machine IPs.
[mgrs]
ceph-1 ansible_ssh_host=192.168.16.87 local_as=ceph-1
ceph-2 ansible_ssh_host=192.168.16.88 local_as=ceph-2
ceph-3 ansible_ssh_host=192.168.16.89 local_as=ceph-3
[mons]
ceph-1 ansible_ssh_host=192.168.16.87 local_as=ceph-1
ceph-2 ansible_ssh_host=192.168.16.88 local_as=ceph-2
ceph-3 ansible_ssh_host=192.168.16.89 local_as=ceph-3
[osds]
ceph-1 ansible_ssh_host=192.168.16.87 local_as=ceph-1
ceph-2 ansible_ssh_host=192.168.16.88 local_as=ceph-2
ceph-3 ansible_ssh_host=192.168.16.89 local_as=ceph-3
[clients]
ceph-1 ansible_ssh_host=192.168.16.87 local_as=ceph-1
ceph-2 ansible_ssh_host=192.168.16.88 local_as=ceph-2
ceph-3 ansible_ssh_host=192.168.16.89 local_as=ceph-3
We need a minimum of 3 MONs (cluster state daemons), 4 OSDs (object storage daemons), and 1 client (to mount the storage).
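Before going further, it's worth confirming that Ansible can reach and become root on every host; the commands below assume the inventory file above is named inventory:
ansible all -i inventory -m ping
ansible all -i inventory -b -m command -a whoami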
Next, we need to copy the sample files into place.
cp site.yml.sample site.yml
cp group_vars/all.yml.sample group_vars/all.yml
cp group_vars/mgrs.yml.sample group_vars/mgrs.yml
cp group_vars/mons.yml.sample group_vars/mons.yml
cp group_vars/osds.yml.sample group_vars/osds.yml
Now, we need to modify some variables in group_vars/all.yml:
cluster: ceph-rbd
ceph_origin: repository
ceph_repository: community
ceph_stable_release: luminous
#monitor_interface: eth0
monitor_interface: "{{ ansible_default_ipv4['interface'] }}"
journal_size: 5120
public_network: 192.168.16.0/24
osd_mkfs_type: xfs
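The monitor_interface line relies on whichever interface Ansible detects as the default route. If you're unsure what that resolves to on your nodes, and whether it sits on the public_network above, you can check with Ansible's setup module, again assuming the inventory above:
ansible mons -i inventory -m setup -a 'filter=ansible_default_ipv4'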
Then in group_vars/mgrs.yml:
ceph_mgr_modules: [status,dashboard,prometheus]
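After the deploy, you can confirm these mgr modules actually loaded, and find the dashboard and prometheus endpoints, with:
ceph mgr module ls
ceph mgr services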
And in group_vars/osds.yml:
osd_scenario: collocated
# either list the OSD devices explicitly:
#devices:
# - /dev/vdb
# or let ceph-ansible discover them:
osd_auto_discovery: true
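If you'd rather list devices explicitly instead of relying on osd_auto_discovery, lsblk on an OSD node shows the candidate disks (the /dev/vdb in the commented example above is just that, an example):
lsblk -o NAME,SIZE,TYPE,MOUNTPOINT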
Once you've created your inventory file and modified your variables, run the playbook:
ansible-playbook site.yml -i inventory
Once done, you should have a functioning Ceph cluster.
For these next steps, we will need to SSH to a machine in your inventory with the client role and sudo to root.
Most of what you need to know can be seen at a glance with:
ceph status
Abnormal statuses for MONs, OSDs, and other daemons/objects will be reported here.
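If the cluster is not reporting HEALTH_OK, these commands give more detail on what is unhappy and where:
ceph health detail
ceph osd tree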
Due to kernel differences, it is safest to create a volume with minimal features. Here we are creating a 1GB volume with layering enabled.
rbd create test --size 1024 --image-feature=layering
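If you want to double-check which features the image ended up with before mapping it:
rbd info test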
Once created, you can map the RBD volume to your client.
rbd map test
Once mapped, format the volume with your filesystem of choice. Make sure you use the right device per the rbd map output:
mkfs.ext4 /dev/rbd0
mkdir /mnt/test
mount /dev/rbd0 /mnt/test
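To confirm the mapping and the mount took:
rbd showmapped
df -h /mnt/test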
When you're done testing, unmount, unmap, and remove the test volume:
umount /dev/rbd0
rbd unmap /dev/rbd0
rbd rm test
Next, create a pool for Kubernetes and a client keyring with access to it. Note that size 1 keeps only a single copy of each object, so use it for testing only; the legacy CRUSH tunables help older kernel clients map volumes from the pool.
ceph osd pool create kube 128
ceph osd pool set kube size 1
ceph osd crush tunables legacy
ceph auth get-or-create client.kube mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=kube' -o ceph.client.kube.keyring
ceph osd pool application enable kube rbd
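To verify the pool and the new client credentials before moving on to Kubernetes:
ceph osd pool ls detail
ceph auth get client.kube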
Finally, render the Kubernetes secrets from their Jinja templates and apply them along with the storage class. The yasha invocations below inject the base64-encoded keys into the step1 through step3 manifests:
pushd storage-class
ssh root@ceph-1 "ceph auth get-key client.kube | base64" | tee client.kube
ssh root@ceph-1 "ceph auth get-key client.admin | base64" | tee client.admin
export client_kube=`cat client.kube`
export client_admin=`cat client.admin`
yasha --client_kube=${client_kube} step1-ceph-secret-user-ns-default.yaml.j2
yasha --client_kube=$client_kube step2-ceph-secret-user.yaml.j2
yasha --client_admin=$client_admin step3-ceph-secret-admin.yaml.j2
kubectl apply -f step1-ceph-secret-user-ns-default.yaml
kubectl apply -f step2-ceph-secret-user.yaml
kubectl apply -f step3-ceph-secret-admin.yaml
kubectl apply -f step4-storage-class.yaml
popd
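For reference, a StorageClass for the in-tree kubernetes.io/rbd provisioner in this setup looks roughly like the sketch below. The step4-storage-class.yaml shipped in the storage-class directory is authoritative; the class name, monitor address, namespace, and secret names here are illustrative assumptions that would need to match your cluster and the secrets created in steps 1 through 3.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ceph-rbd                      # assumed class name
provisioner: kubernetes.io/rbd
parameters:
  monitors: 192.168.16.87:6789        # MON address(es) from your inventory
  adminId: admin
  adminSecretName: ceph-secret-admin  # assumed to match the step3 secret
  adminSecretNamespace: kube-system   # assumed namespace for the admin secret
  pool: kube
  userId: kube
  userSecretName: ceph-secret-user    # assumed to match the step1/step2 secrets
  fsType: ext4
  imageFormat: "2"
  imageFeatures: layering
A quick end-to-end test is to create a small PersistentVolumeClaim with storageClassName set to the class above and check that kubectl get pvc reports it as Bound.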