# 2-Node HA Cluster Configuration with DRBD, Pacemaker, Corosync, LVM and iSCSI

## Prerequisites
- Two servers, each with:
  - 2 network interfaces (1 for Proxmox, 1 for the point-to-point DRBD link)
  - Hardware RAID disk with a 128k stripe size
  - Linux OS (Debian/Ubuntu)
  - Direct 10GbE connection between the nodes

## 1. Network Configuration

```bash
# Nodo1
sudo ip addr add 192.168.100.1/30 dev eth1
sudo ip link set eth1 up

# Nodo2
sudo ip addr add 192.168.100.2/30 dev eth1
sudo ip link set eth1 up

# Verify
ping 192.168.100.2  # From nodo1 to nodo2
```
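
The `ip` commands above do not survive a reboot. A minimal sketch for making the address persistent on Debian with ifupdown (the file name is an assumption; use 192.168.100.2 on nodo2):

```bash
# Persist the DRBD link address (Debian ifupdown)
sudo tee /etc/network/interfaces.d/drbd-link <<'EOF'
auto eth1
iface eth1 inet static
    address 192.168.100.1
    netmask 255.255.255.252
EOF
```
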
## 2. Package Installation

```bash
sudo apt-get update
sudo apt-get install -y drbd-utils pacemaker corosync lvm2 targetcli-fb pcs
```
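
Step 4 requires the pcsd daemon to be running and the `hacluster` user to have a password on both nodes before `pcs cluster auth` can succeed; a short sketch:

```bash
# Run on both nodes
sudo systemctl enable --now pcsd
sudo passwd hacluster   # set the password used later with "pcs cluster auth"
```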

## 3. DRBD Configuration

File `/etc/drbd.d/r0.res`:

```
resource r0 {
  protocol C;
  meta-disk internal;
  device /dev/drbd0;
  disk /dev/sdb1;
  handlers {
    pri-on-incon-degr "/usr/lib/drbd/notify-pri-on-incon-degr.sh; echo b > /proc/sysrq-trigger ; reboot -f";
    pri-lost-after-sb "/usr/lib/drbd/notify-pri-lost-after-sb.sh; echo b > /proc/sysrq-trigger ; reboot -f";
  }
  net {
    cram-hmac-alg "sha1";
    shared-secret "miapasswordsegreta";
  }
  on nodo1 {
    address 192.168.100.1:7788;
  }
  on nodo2 {
    address 192.168.100.2:7788;
  }
}
```

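The DRBD resource uses `/dev/sdb1` as backing device on both nodes; if the RAID volume has not been partitioned yet, a possible preparation step (device name assumed, run on both nodes) is:

```bash
# Create one partition spanning the whole RAID volume (assumes /dev/sdb is dedicated to DRBD)
sudo parted -s /dev/sdb mklabel gpt
sudo parted -s -a optimal /dev/sdb mkpart primary 0% 100%
```
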
DRBD initialization:

```bash
sudo drbdadm create-md r0
sudo drbdadm up r0
# On the primary node:
sudo drbdadm primary --force r0
```
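
The cluster resources in step 5 assume a volume group `vg_iscsi` with a logical volume `lv_iscsi` on top of the DRBD device. A minimal sketch, to be run once on the current Primary (the LV size is only an example):

```bash
# On the DRBD Primary only: the volume group lives on /dev/drbd0, not on the backing /dev/sdb1
sudo pvcreate /dev/drbd0
sudo vgcreate vg_iscsi /dev/drbd0
sudo lvcreate -n lv_iscsi -L 100G vg_iscsi
```

Consider also restricting the LVM scan filter in `/etc/lvm/lvm.conf` so that the backing `/dev/sdb1` is never activated directly.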

## 4. Pacemaker Configuration with PCS

```bash
# Cluster authentication
sudo pcs cluster auth nodo1 nodo2 -u hacluster -p miapassword

# Cluster setup
sudo pcs cluster setup --name cluster_iscsi nodo1 nodo2
sudo pcs cluster start --all
sudo pcs cluster enable --all

# Basic settings for a 2-node cluster (fencing disabled, quorum ignored)
sudo pcs property set stonith-enabled=false
sudo pcs property set no-quorum-policy=ignore
```
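
The commands above follow the pcs 0.9 syntax found on older Debian/Ubuntu releases. On pcs 0.10 and later the authentication and setup steps changed; roughly equivalent calls would be:

```bash
# pcs >= 0.10: host-level authentication and positional cluster name
sudo pcs host auth nodo1 nodo2 -u hacluster -p miapassword
sudo pcs cluster setup cluster_iscsi nodo1 nodo2
```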

## 5. Cluster Resources

```bash
# DRBD resources (master/slave)
sudo pcs resource create p_drbd_r0 ocf:linbit:drbd drbd_resource=r0 op monitor interval=15s
sudo pcs resource master ms_drbd_r0 p_drbd_r0 master-max=1 master-node-max=1 clone-max=2 clone-node-max=1 notify=true

# LVM
sudo pcs resource create p_lvm_iscsi ocf:heartbeat:LVM volgrpname=vg_iscsi op monitor interval=10s timeout=30s

# iSCSI (implementation "lio-t" matches the targetcli-fb / LIO stack installed above)
sudo pcs resource create p_iscsi_target ocf:heartbeat:iSCSITarget iqn="iqn.2023-06.com.example:storage.iscsi" implementation="lio-t"
sudo pcs resource create p_iscsi_lun ocf:heartbeat:iSCSILogicalUnit target_iqn="iqn.2023-06.com.example:storage.iscsi" lun=1 path="/dev/vg_iscsi/lv_iscsi"
```
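
`pcs resource master` is also pcs 0.9 syntax; on pcs 0.10 and later the same effect is obtained by turning the DRBD primitive into a promotable clone, roughly:

```bash
# pcs >= 0.10: promotable clone replaces the master/slave resource
sudo pcs resource promotable p_drbd_r0 promoted-max=1 promoted-node-max=1 clone-max=2 clone-node-max=1 notify=true
```

This creates a clone named `p_drbd_r0-clone` by default, which would then replace `ms_drbd_r0` in the constraints of step 6.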

## 6. Cluster Constraints

```bash
sudo pcs constraint colocation add p_lvm_iscsi with master ms_drbd_r0 INFINITY
sudo pcs constraint colocation add p_iscsi_target with p_lvm_iscsi INFINITY
sudo pcs constraint colocation add p_iscsi_lun with p_iscsi_target INFINITY
sudo pcs constraint order promote ms_drbd_r0 then start p_lvm_iscsi
sudo pcs constraint order start p_lvm_iscsi then start p_iscsi_target
sudo pcs constraint order start p_iscsi_target then start p_iscsi_lun
```
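
Initiators (e.g. Proxmox) reach the target through a portal IP that has to follow the target on failover. A possible sketch of a floating IP resource on the Proxmox network (the address 10.0.0.100/24 and the resource name are assumptions):

```bash
# Floating portal IP that moves together with the iSCSI target
sudo pcs resource create p_vip_iscsi ocf:heartbeat:IPaddr2 ip=10.0.0.100 cidr_netmask=24 op monitor interval=10s
sudo pcs constraint colocation add p_vip_iscsi with p_iscsi_target INFINITY
sudo pcs constraint order start p_iscsi_lun then start p_vip_iscsi
```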

## 7. iSCSI Target Configuration

```bash
sudo targetcli backstores/block create name=iscsi_backend dev=/dev/vg_iscsi/lv_iscsi
sudo targetcli iscsi/ create iqn.2023-06.com.example:storage.iscsi
sudo targetcli saveconfig
```
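
With the LIO/targetcli stack, initiators can only log in if they are listed in the target's ACLs (or if demo mode is enabled). A sketch for allowing one initiator, where the client IQN is only an example:

```bash
# Allow a specific initiator to access the target (replace the client IQN with the real one)
sudo targetcli /iscsi/iqn.2023-06.com.example:storage.iscsi/tpg1/acls create iqn.2023-06.com.example:client01
sudo targetcli saveconfig
```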

## Useful Commands

```bash
# Cluster status
sudo pcs status
sudo drbdadm status

# Manual failover (the --master flag moves the DRBD Master role)
sudo pcs resource move ms_drbd_r0 nodo2 --master
sudo pcs resource clear ms_drbd_r0
```
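
A less intrusive way to test failover is to put the active node in standby and bring it back afterwards (pcs 0.9 syntax, matching the commands above):

```bash
# Drain all resources off nodo1, then re-enable it
sudo pcs cluster standby nodo1
sudo pcs status                  # resources should now be running on nodo2
sudo pcs cluster unstandby nodo1
```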

## Final Notes

1. Replace all placeholder values with your own real data.
2. The initial DRBD synchronization can take a long time.
3. Test failover in a non-production environment first.