Ceph Ansible Quickstart Guide

Quick and Dirty Ceph Cluster

This is a bare-bones guide on how to set up a Ceph cluster using ceph-ansible.

This guide assumes:

  • Ansible is installed on your local machine
  • Three CentOS 7.2 nodes are provisioned and formatted with XFS
  • You have SSH and sudo access to your nodes

Ansible Setup

First, clone the ceph-ansible repo.

git clone https://github.com/ceph/ceph-ansible.git
cd ceph-ansible
git checkout stable-3.1

pip install ansible==2.5.0

The stable-3.1 branch works with Ansible 2.5; the master branch requires Ansible 2.6.

Next, add an inventory file to the base of the repo. Using the following contents as a guide, replace the example IPs with your own machine IPs.

[mgrs]
ceph-1 ansible_ssh_host=192.168.16.87 local_as=ceph-1
ceph-2 ansible_ssh_host=192.168.16.88 local_as=ceph-2
ceph-3 ansible_ssh_host=192.168.16.89 local_as=ceph-3

[mons]
ceph-1 ansible_ssh_host=192.168.16.87 local_as=ceph-1
ceph-2 ansible_ssh_host=192.168.16.88 local_as=ceph-2
ceph-3 ansible_ssh_host=192.168.16.89 local_as=ceph-3

[osds]
ceph-1 ansible_ssh_host=192.168.16.87 local_as=ceph-1
ceph-2 ansible_ssh_host=192.168.16.88 local_as=ceph-2
ceph-3 ansible_ssh_host=192.168.16.89 local_as=ceph-3



[clients]
ceph-1 ansible_ssh_host=192.168.16.87 local_as=ceph-1
ceph-2 ansible_ssh_host=192.168.16.88 local_as=ceph-2
ceph-3 ansible_ssh_host=192.168.16.89 local_as=ceph-3


We need a minimum of 3 MONs (cluster state daemons), enough OSDs (object storage daemons) to satisfy the pool replication size (3 by default), and at least 1 client (to mount the storage). In this example, the mgr, mon, osd, and client roles are all collocated on the same three nodes.
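
Before going further, it is worth confirming that Ansible can reach every host in the inventory. This is an optional sanity check; -b assumes your SSH user can escalate to root:

ansible all -i inventory -m ping -b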

Next, we need to copy the sample configuration files into place.

cp site.yml.sample site.yml
cp group_vars/all.yml.sample group_vars/all.yml
cp group_vars/mgrs.yml.sample group_vars/mgrs.yml
cp group_vars/mons.yml.sample group_vars/mons.yml
cp group_vars/osds.yml.sample group_vars/osds.yml
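
The inventory above also defines a [clients] group; if you need to override any client-side defaults, the branch should also ship a clients sample that can be copied the same way:

cp group_vars/clients.yml.sample group_vars/clients.yml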

Now, we need to modify some variables in the group_vars files:

  1. group_vars/all.yml:

cluster: ceph-rbd
ceph_origin: repository
ceph_repository: community
ceph_stable_release: luminous

#monitor_interface: eth0
monitor_interface: "{{ ansible_default_ipv4['interface'] }}"

journal_size: 5120

public_network: 192.168.16.0/24

osd_mkfs_type: xfs

  2. group_vars/mgrs.yml:

ceph_mgr_modules: [status,dashboard,prometheus]

  3. group_vars/osds.yml:

osd_scenario: collocated

# either list the OSD devices explicitly...
#devices:
#  - /dev/vdb

# ...or let ceph-ansible discover empty devices automatically
osd_auto_discovery: true
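
If you rely on osd_auto_discovery, it can help to first check which block devices are actually present on the OSD nodes. A quick ad-hoc way, using the osds group from the inventory above:

ansible osds -i inventory -b -m command -a lsblk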

Running ceph-ansible playbook

Once you've created your inventory file and modified your variables:

ansible-playbook site.yml -i inventory

Once done, you should have a functioning Ceph cluster.
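
As a quick check, you can ask one of the monitor nodes for the cluster state before moving on (this assumes root SSH access to ceph-1, as used later in this guide):

ssh root@ceph-1 ceph -s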

Interacting with Ceph

For these next steps, we will need to SSH to a machine in your inventory with the client role and sudo to root.

Examine cluster status

Most of what you need to know can be seen at a glance by running

ceph status

Abnormal statuses for MONs, OSDs, and other daemons/objects will be reported here.
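
If the status is not HEALTH_OK, a few standard Ceph commands help narrow down the problem:

ceph health detail
ceph osd tree
ceph df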

Create an RBD volume

Due to kernel differences, it is safest to create a volume with minimal features. Here we are creating a 1GB volume with layering enabled.

rbd create test --size 1024  --image-feature=layering
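
You can confirm the image and its enabled features before mapping it:

rbd info test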

Map an RBD volume

Once created, you can map the RBD volume to your client.

rbd map test
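
To see which device the image was mapped to (usually /dev/rbd0 on a fresh client):

rbd showmapped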

Format a volume

Once mapped, format the volume with your filesystem of choice. Make sure you use the right device, per the rbd map output.

mkfs.ext4 /dev/rbd0
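
If you would rather use XFS (the StorageClass at the end of this guide uses fsType: xfs), the equivalent is:

mkfs.xfs /dev/rbd0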

Mount the volume

mkdir /mnt/test
mount /dev/rbd0 /mnt/test
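
A quick check that the volume is mounted with the expected size:

df -h /mnt/test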

Delete the volume

umount /dev/rbd0
rbd unmap /dev/rbd0
rbd rm test
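
Listing the images in the pool should confirm the image is gone:

rbd ls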

Kube

To use the cluster as a storage backend for Kubernetes, create a dedicated pool and a client key, then render and apply the secrets and the StorageClass shown below.

ceph osd pool create kube 128
ceph osd pool set kube size 1
ceph osd crush tunables legacy
ceph auth get-or-create client.kube mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=kube' -o ceph.client.kube.keyring
ceph osd pool application enable kube rbd
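
Note that size 1 keeps only a single copy of each object, which is only appropriate for a test cluster, and the legacy CRUSH tunables are there for compatibility with older client kernels. You can inspect the resulting pool settings with:

ceph osd pool get kube size
ceph osd pool get kube pg_num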

pushd storage-class
ssh root@ceph-1 "ceph auth get-key client.kube | base64" | tee client.kube
ssh root@ceph-1 "ceph auth get-key client.admin | base64" | tee client.admin

export client_kube=`cat client.kube`
export client_admin=`cat client.admin`

yasha --client_kube=${client_kube} step1-ceph-secret-user-ns-default.yaml.j2 
yasha --client_kube=$client_kube step2-ceph-secret-user.yaml.j2 
yasha --client_admin=$client_admin step3-ceph-secret-admin.yaml.j2

kubectl apply -f step1-ceph-secret-user-ns-default.yaml 
kubectl apply -f step2-ceph-secret-user.yaml
kubectl apply -f step3-ceph-secret-admin.yaml
kubectl apply -f step4-storage-class.yaml

popd
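
At this point the secrets and the StorageClass should exist; a quick way to verify (assuming kubectl points at the right cluster):

kubectl get secret ceph-secret-user ceph-secret-admin -n kube-system
kubectl get secret ceph-secret-user -n default
kubectl get storageclass dynamic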

These are the templates and the StorageClass manifest referenced by the commands above:

step1-ceph-secret-user-ns-default.yaml.j2:

apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret-user
type: "kubernetes.io/rbd"
data:
  # Please note this value is base64 encoded.
  key: {{client_kube}}

step2-ceph-secret-user.yaml.j2:

apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret-user
  namespace: kube-system
type: "kubernetes.io/rbd"
data:
  # Please note this value is base64 encoded.
  key: {{client_kube}}

step3-ceph-secret-admin.yaml.j2:

apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret-admin
  namespace: kube-system
type: "kubernetes.io/rbd"
data:
  # Please note this value is base64 encoded.
  key: {{client_admin}}

step4-storage-class.yaml:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: dynamic
  annotations:
    storageclass.beta.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/rbd
parameters:
  monitors: 192.168.16.87:6789,192.168.16.88:6789,192.168.16.89:6789
  adminId: admin
  adminSecretName: ceph-secret-admin
  adminSecretNamespace: kube-system
  pool: kube
  userId: kube
  userSecretName: ceph-secret-user
  fsType: xfs
  #fsType: any filesystem type supported by Kubernetes. Default: "ext4".
  #fsType: ext4
  imageFormat: "2"
  imageFeatures: "layering"
reclaimPolicy: Retain
allowVolumeExpansion: true
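
To check that dynamic provisioning works end to end, you can create a small test claim against the dynamic StorageClass; the claim name and size here are arbitrary, and the claim can be deleted afterwards:

kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rbd-test-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: dynamic
  resources:
    requests:
      storage: 1Gi
EOF

kubectl get pvc rbd-test-claim

If the claim reaches the Bound state, the rbd provisioner has created a matching image in the kube pool.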