
Running VCSA 7.0.3 (vCenter Server Appliance v7.x) on KVM/libvirt, using virsh/virt-install

I came across this helpful gist, which described doing this with an old version of the VCSA (v6.0) using Ansible.

While I am not using Ansible directly (yet), I was able to use the steps there as a starting point and adapt the workflow to bring up VCSA v7.0.3 in libvirt/KVM. It should work for any 7.x version, but no guarantees it will work with VCSA v8.x.

A few things had to be adapted (namely the guestfish commands, since the filesystem layout changed in the newer VCSA versions released after that Ansible playbook was written).

Environment

I've verified these steps on my AlmaLinux 8.10 bare-metal host.

The commands run below come from the following packages:

  • curl-7.61.1-34 (optional, only used for monitoring installation progress)
  • jq-1.6-9 (optional, only used for monitoring installation progress)
  • libguestfs-tools-c-1.44.0-9
  • libvirt-8.0.0-23.2
  • libvirt-client-8.0.0-23.2
  • podman-4.9.4-12 (optional, only used for monitoring installation progress)
  • qemu-img-6.2.0-52
  • virt-install-3.2.0-4

Steps

Here's the full run-down of the shell (bash) commands I'm using. The converted VMDK files, now qcow2, are in /tmp/vcsa-v7, along with a disk-info file I generated from the VMDKs and a settings.json file that drives the hands-off installation; their contents are also included below.

set up the storage-pool

virsh pool-define-as --name vcsa --type dir --target /var/lib/libvirt/images/vcsa
virsh pool-start --pool vcsa --build
virsh pool-autostart --pool vcsa
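
Before moving on, you can sanity-check that the pool came up as expected; these are plain virsh queries, nothing VCSA-specific:

virsh pool-info vcsa     # should report State: running and Autostart: yes
virsh pool-list --all    # vcsa should be listed as active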

create empty disks

for V in $(</tmp/vcsa-v7/disk-info); do
    N=${V%%=*};                   # disk name, e.g. disk1
    S=${V##*=};                   # capacity in MiB, e.g. 49728
    P=$(echo $V|cut -d= -f2);     # volume-group label, e.g. vg_root_0
    echo -n "creating $P disk, size $S megs.. ";
    virsh vol-create-as --pool vcsa --format qcow2 --capacity ${S}M vcsa-$N.qcow2 ;
done
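
To confirm all seventeen volumes were created with the expected capacities:

virsh vol-list --pool vcsa --details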

import vmdk-converted-to-qcow2 images

See these steps for how to take the VCSA ISO, extract the embedded OVA/OVF, and convert the bundled VMDK files.
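
In case that link goes away, the rough idea is: loop-mount the ISO, unpack the OVA it ships (an OVA is just a tar archive), and convert each VMDK with qemu-img. A minimal sketch, with illustrative paths since exact file names vary by VCSA build:

mkdir -p /mnt/vcsa /tmp/vcsa-v7
mount -o loop VMware-VCSA-all-7.0.3-*.iso /mnt/vcsa
tar -xf /mnt/vcsa/vcsa/VMware-vCenter-Server-Appliance-*.ova -C /tmp/vcsa-v7
for D in /tmp/vcsa-v7/*.vmdk; do
    qemu-img convert -O qcow2 "$D" "${D%.vmdk}.qcow2" && rm -f "$D"
done
umount /mnt/vcsa
# (the capacities recorded in disk-info can be read back per-image with: qemu-img info <file>)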

for F in /tmp/vcsa-v7/*qcow2; do
    echo -n "importing ${F##*/}.. ";
    # with the stock OVA-derived file names, the 8th dash-separated field is "diskN.qcow2"
    virsh vol-upload --pool vcsa --vol vcsa-$(echo $F|cut -d- -f8) --file $F --sparse ;
    echo;
done

enable automated installation by injecting settings.json into a new /var/install dir on the root filesystem

guestfish add \
    $(virsh vol-list --pool vcsa --details | awk '$3 ~ /file/ {print $2; exit}') \
    : run \
    : mount /dev/vg_root_0/lv_root_0 / \
    : mkdir /var/install \
    : copy-in /tmp/vcsa-v7/settings.json /var/install
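
Optionally, double-check the file landed where the installer expects it; a read-only peek using the same guestfish incantation:

guestfish --ro add \
    $(virsh vol-list --pool vcsa --details | awk '$3 ~ /file/ {print $2; exit}') \
    : run \
    : mount-ro /dev/vg_root_0/lv_root_0 / \
    : cat /var/install/settings.json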

create and boot up VM

virt-install \
    -n vcsa \
    --ram 20480 \
    --vcpus 4 \
    --cpu host-passthrough \
    --import $(
        for i in $(
            seq 1 $(cat /tmp/vcsa-v7/disk-info | wc -l)
        ); do
            echo -n "--disk vol=vcsa/vcsa-disk$i.qcow2,bus=sata ";
        done) \
    --os-variant linux2022 \
    --network bridge:br0,model=e1000e \
    --graphics vnc,password=vncpassword \
    --noautoconsole
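
The install runs unattended from here, but if you want to eyeball the console anyway, the VNC endpoint can be looked up with:

virsh domdisplay vcsa    # e.g. vnc://localhost:0 (password as set above)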

Optionally, waiting for the install to complete (~20-30 minutes)

you can either poll the service port (5480)...

first let's wait until it starts responding:

VCSA_FQDN=vcsa.domain.com
VCSA_ROOT_PASSWORD='MWare123!'
ready=0;
echo -n 'Waiting for service port to come online' && \
sleep 15 && \
while [[ $ready -ne 1 ]]; do 
    nc -vz $VCSA_FQDN 5480 2>/dev/null; 
    if [[ $? -ne 0 ]]; then
        echo -n '.';
        sleep 5;
    else
        ready=1
    fi;
done; echo

... then we can watch the progress of the installation

ready=0; # now let's watch the progress of the installation
MSG=$(curl -s -k -X GET -H "Authorization: Basic $(echo -n "root:$VCSA_ROOT_PASSWORD" | base64)" https://${VCSA_FQDN}:5480/rest/vcenter/deployment | jq -r '.progress.message.default_message' 2>/dev/null) && \
echo -n "Current state: $MSG" && \
while [[ $ready -ne 1 ]]; do
    NEW_MSG=$(curl -s -k -X GET -H "Authorization: Basic $(echo -n "root:$VCSA_ROOT_PASSWORD" | base64)" https://${VCSA_FQDN}:5480/rest/vcenter/deployment | jq -r '.progress.message.default_message' 2>/dev/null);
    if [[ $NEW_MSG =~ "Task has completed successfully" ]]; then
        echo;
        echo "$NEW_MSG";
        ready=1;
    else
        # print the message only when it changes, otherwise just tick along
        if [[ -n "$NEW_MSG" && "$MSG" != "$NEW_MSG" ]]; then
            echo;
            echo -n "$NEW_MSG";
            MSG="$NEW_MSG";
        fi;
        echo -n ".";
    fi;
    sleep 1;
done
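
If you'd rather take a one-off look at the full deployment status instead of looping, the same 5480 endpoint can be queried directly:

curl -s -k -H "Authorization: Basic $(echo -n "root:$VCSA_ROOT_PASSWORD" | base64)" \
    https://${VCSA_FQDN}:5480/rest/vcenter/deployment | jq .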

...or waiting for the API to finally come up

If you don't care about that level of detail, you can just periodically poll whether the API is finally up once everything is done, using govc. By the way, here I run govc in VMware's official container image:

VCSA_SSO_PASSWORD='MWare123!'
alias govc='podman run --rm -e GOVC_INSECURE=1 -e [email protected] -e GOVC_PASSWORD="$VCSA_SSO_PASSWORD" -e GOVC_URL=https://$VCSA_FQDN/sdk docker.io/vmware/govc:v0.42.0 /govc'
ready=0;
echo -n "Waiting for API to come online once installation is complete..." && \
while [[ $ready -ne 1 ]]; do 
    OUTPUT="$(govc about 2>/dev/null)"
    if [[ $? -ne 0 ]]; then 
        echo -n ".";
        sleep 30;
    else
        echo;
        echo "$OUTPUT"
        ready=1;
    fi;        
done # exits only once the SDK stops returning errors, i.e. installation has finished and the VCSA is up
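
Once it answers, the same alias is handy for quick smoke tests, for example:

govc about    # version/build details (the same call the loop above polls)
govc ls       # top-level inventory listing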

Optionally, save some space

Finally, once everything is working, you can optionally save some space by getting rid of the 2nd disk, which is just the CD installation media and needlessly stays attached to the VM even after installation. I'm guessing it shouldn't be needed once the VCSA is up and running correctly.

shut the VM down

virsh shutdown vcsa
echo -n 'Waiting for VM to come down' && \
VM_STATE=$(virsh dominfo vcsa | awk '/^State:/ {$1="";print}') && \
while [[ ! $VM_STATE =~ "shut off" ]]; do
    echo -n '.';
    sleep 2;
    VM_STATE=$(virsh dominfo vcsa | awk '/^State:/ {$1="";print}')
done; echo
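
An equivalent, terser alternative is polling virsh domstate until it reports shut off:

until [[ $(virsh domstate vcsa) == "shut off" ]]; do echo -n '.'; sleep 2; done; echo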

then remove the disk's /etc/fstab entry so it's no longer referenced

guestfish add \
    $(virsh vol-list --pool vcsa --details | awk '$3 ~ /file/ {print $2; exit}') \
    : run \
    : mount /dev/vg_root_0/lv_root_0 / \
    : command 'sed -i -n '\''/cdrom iso9660/!p'\'' /etc/fstab'

detach the disk

virsh detach-disk \
    --domain vcsa \
    --target $(virsh vol-list --pool vcsa --details | awk '/disk2.qcow2/ {print $2}')\
    --config\
    --persistent

and delete the disk-image

virsh vol-delete --pool vcsa --vol vcsa-disk2.qcow2
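
To confirm both the volume and the domain's reference to it are gone before booting again:

virsh vol-list --pool vcsa | grep disk2 || echo 'disk2 volume removed'
virsh dumpxml vcsa | grep disk2 || echo 'no disk2 left in domain XML'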

verify vcsa comes back up:

virsh start vcsa

And if you need to cleanup the VM and its disks

bye-bye vcsa...

virsh destroy vcsa # stop VM
virsh undefine vcsa # delete VM 
virsh vol-list --pool vcsa --details \
    | awk '$3 ~ /file/ {print $1}' \
    | xargs -n 1 virsh vol-delete --pool vcsa # delete all disks
virsh pool-destroy --pool vcsa
virsh pool-undefine --pool vcsa
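
Note that pool-destroy/pool-undefine leave the backing directory in place. If you also want /var/lib/libvirt/images/vcsa itself removed (assuming nothing else lives there), run this between the destroy and undefine steps:

virsh pool-delete --pool vcsa    # removes the pool's backing directory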

Additional files

Be sure to adjust the disk-info and settings.json files below to match your environment, and put them into the /tmp/vcsa-v7 dir referenced above.

(And yes, I'm fully IPv6 with no IPv4, so you may need to adjust these if you still need IPv4 connectivity.)

disk-info:

disk1=vg_root_0=49728
disk2=install_media=5873
disk3=swap_vg=25600
disk4=core_vg=25600
disk5=log_vg=10240
disk6=db_vg=10240
disk7=dblog_vg=15360
disk8=seat_vg=10240
disk9=netdump_vg=1024
disk10=autodeploy_vg=10240
disk11=imagebuilder_vg=10240
disk12=updatemgr_vg=102400
disk13=archive_vg=51200
disk14=vtsdb_vg=10240
disk15=vtsdblog_vg=5120
disk16=lifecycle_vg=102400
disk17=vg_lvm_snapshot=153600
settings.json:

{
"appliance.net.addr.family": "ipv6",
"appliance.net.mode": "autoconf",
"appliance.net.gateway": "default",
"appliance.net.dns.servers": "2606:4700:4700::1111,2606:4700:4700::1001",
"appliance.ntp.servers": "time.cloudflare.com",
"appliance.net.pnid": "vcsa.domain.com",
"deployment.autoconfig": "True",
"deployment.node.type": "embedded",
"appliance.ssh.enabled": "true",
"appliance.root.shell": "True",
"appliance.root.passwd": "MWare123!",
"ceip_enabled": "False",
"hadcs.enabled": "False",
"vmdir.domain-name": "vsphere.local",
"vmdir.username": "[email protected]",
"vmdir.password": "MWare123!"
}
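
Since a single stray comma or quote will derail the unattended install, it's worth validating the JSON before using it:

jq . /tmp/vcsa-v7/settings.json >/dev/null && echo 'settings.json parses cleanly'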
Additionally, if you want to watch an upgrade in progress:

watch -n 2 -d 'curl -s -k -X GET -H "Authorization: Basic '$(echo -n "root:$VCSA_ROOT_PASSWORD" | base64)'" https://'${VCSA_FQDN}':5480/rest/appliance/update | jq -r "del(.task.subtasks)"'

Or a backup in progress:

watch -n 2 -d 'curl -s -k -X GET -H "Authorization: Basic '$(echo -n "root:$VCSA_ROOT_PASSWORD" | base64)'" https://'${VCSA_FQDN}':5480/rest/appliance/recovery/backup/job/details | jq -r ".value[-1].value"'
