How to deploy Ceph on Ubuntu 22.04
Diogo Monteiro
- 29 Jun, 2024

Given recent technological advancements, data traffic has become more abundant and valuable. Considering this, along with the numerous benefits of Distributed Systems (DS), Ceph has emerged as a reliable and scalable tool to enhance data redundancy and storage within your local infrastructure.
Before continuing, you need to consider some characteristics of our deployment. We will use three machines, one monitor node (ceph-mon) and two OSD nodes (ceph-osd1 and ceph-osd2), each with one unformatted disk (in our environment, /dev/vdb).
Basic configuration (all nodes)
Create the user cluster-admin.
sudo adduser cluster-admin
Add the user to the sudo group.
sudo usermod -aG sudo cluster-admin
To allow cluster-admin to run privileged commands without entering a password, edit the sudoers file and add the line cluster-admin ALL = (root) NOPASSWD:ALL just before @includedir /etc/sudoers.d.
sudo vim /etc/sudoers
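Alternatively, a drop-in file under /etc/sudoers.d achieves the same without touching the main sudoers file; a minimal sketch (the file name cluster-admin is arbitrary):
echo 'cluster-admin ALL = (root) NOPASSWD:ALL' | sudo tee /etc/sudoers.d/cluster-admin
sudo chmod 0440 /etc/sudoers.d/cluster-admin
sudo visudo -cf /etc/sudoers.d/cluster-admin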
In /etc/hosts, add the following lines, adapting each IP to your environment.
192.168.123.110 ceph-mon
192.168.123.111 ceph-osd1
192.168.123.112 ceph-osd2
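If you prefer not to open an editor, the same entries can be appended from the shell; a small sketch assuming the IPs above:
cat <<'EOF' | sudo tee -a /etc/hosts
192.168.123.110 ceph-mon
192.168.123.111 ceph-osd1
192.168.123.112 ceph-osd2
EOF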
Install docker.io, which cephadm will use as the container runtime for the Ceph daemons.
sudo apt update && sudo apt upgrade
sudo apt install -y docker.io
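To confirm that Docker is installed and its daemon is running:
docker --version
sudo systemctl is-active docker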
On ceph-mon, generate an SSH key pair (leave the passphrase empty):
ssh-keygen -t rsa -b 4096 -C "ceph-mon"
Copy the public key to the OSD nodes:
ssh-copy-id ceph-osd1
ssh-copy-id ceph-osd2
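To confirm that passwordless login works (assuming you are logged in as cluster-admin on ceph-mon), each command should print the remote hostname without asking for a password:
ssh ceph-osd1 hostname
ssh ceph-osd2 hostname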
Configuring Ceph on the master (ceph-mon)
First, download cephadm:
mkdir bin
cd bin
curl --silent --remote-name --location https://github.com/ceph/ceph/raw/pacific/src/cephadm/cephadm
chmod u+x cephadm
cd ..
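To check that the script was downloaded intact and is executable:
./bin/cephadm --help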
Now we need to choose which Ceph release to install; in this tutorial we will use Pacific 16.2.10. For other versions, consult the Ceph documentation.
sudo ./bin/cephadm add-repo --version 16.2.10
Install ceph-common.
sudo apt install -y ceph-common
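To confirm that the client tools were installed:
ceph --version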
Bootstrap the cluster, adding your first monitor. The output saved in bootstrap_output.txt includes the URL and login credentials for the Ceph Dashboard.
sudo ./bin/cephadm bootstrap --mon-ip 192.168.123.110 &> bootstrap_output.txt
To verify Ceph status:
sudo ceph status
If the cluster status is HEALTH_WARN, in some cases this is because the current number of OSDs is smaller than the default pool replica size (3). Since we will have only two OSDs, we need to change the replica size to two.
sudo ceph config set global osd_pool_default_size 2
To verify if it was correctly changed:
sudo ceph config get osd osd_pool_default_size
To keep the number of monitors and managers fixed at one (the placement hostname must match the host that runs them, ceph-mon in our case):
sudo ceph orch apply mon --placement="1 ceph-mon"
sudo ceph orch apply mgr --placement="1 ceph-mon"
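To check that the placements were applied, list the services managed by the orchestrator:
sudo ceph orch ls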
Configuring Ceph on the clients (ceph-osds)
Install ceph-common.
sudo apt install -y ceph-common
After installing the package, copy the cluster configuration file and the admin keyring from the master node.
sudo scp cluster-admin@ceph-mon:/etc/ceph/ceph.conf /etc/ceph/ceph.conf
sudo scp cluster-admin@ceph-mon:/etc/ceph/ceph.client.admin.keyring /etc/ceph/ceph.client.admin.keyring
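With both files in place, the client node can reach the cluster; as a quick sanity check, the following should print the same cluster status you saw on ceph-mon:
sudo ceph -s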
Now continue with the additional commands below to complete the configuration.
Additional commands
Add nodes
Execute the following commands on ceph-mon. Note that cephadm connects to each new host over SSH as root using the cluster's own key (/etc/ceph/ceph.pub), so that key must be present on the OSD nodes first; see the sketch below.
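A sketch of distributing the cluster key, following the standard cephadm procedure (this assumes root SSH logins are permitted on the OSD nodes):
ssh-copy-id -f -i /etc/ceph/ceph.pub root@ceph-osd1
ssh-copy-id -f -i /etc/ceph/ceph.pub root@ceph-osd2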
sudo ceph orch host add ceph-osd1 192.168.123.111 --labels=osd
sudo ceph orch host add ceph-osd2 192.168.123.112 --labels=osd
To verify if they were properly added to the cluster:
sudo ceph orch host ls
Add OSD units
Execute the following commands on ceph-mon. The first lists the storage devices available on each host (our unformatted /dev/vdb should appear as available); the next two create an OSD on each disk.
sudo ceph orch device ls
sudo ceph orch daemon add osd ceph-osd1:/dev/vdb
sudo ceph orch daemon add osd ceph-osd2:/dev/vdb
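To confirm that both OSDs came up and joined the cluster:
sudo ceph osd tree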
Create the RBD image at ceph-mon
Create a pool called main-block-devices.
sudo ceph osd pool create main-block-devices
Verify that the pool was properly created:
sudo ceph osd pool ls
Initialize the pool for use by RBD.
sudo rbd pool init main-block-devices
Create an RBD image called foo (the size is given in MiB, so 1024 corresponds to 1 GiB).
sudo rbd create main-block-devices/foo --size 1024
Verify that the image was properly created:
sudo rbd -p main-block-devices ls
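To inspect the size and features of the new image:
sudo rbd info main-block-devices/foo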
Map and mount the RBD image at ceph-osds
Execute the following commands on the client nodes. Keep in mind that ext4 is not a cluster filesystem: do not mount the same image on more than one node at a time, or it will be corrupted.
sudo rbd map foo --name client.admin -m ceph-mon -k /etc/ceph/ceph.client.admin.keyring -p main-block-devices
sudo mkfs.ext4 /dev/rbd/main-block-devices/foo
sudo mkdir /mnt/ceph-rbd
sudo mount /dev/rbd/main-block-devices/foo /mnt/ceph-rbd
To verify if disk is mounted:
lsblk
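As a final sanity check, write a test file and read it back (the file name hello.txt is arbitrary):
echo 'hello ceph' | sudo tee /mnt/ceph-rbd/hello.txt
cat /mnt/ceph-rbd/hello.txt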