I came across a helpful gist describing how to do this with an old version of the VCSA (v6.0) using Ansible.
While I'm not using Ansible directly (yet), I was able to use the steps there as a starting point and adapt the workflow to bring up VCSA v7.0.3 on libvirt/KVM. It should work for any 7.x version, but no guarantees for VCSA v8.x.
There were a few things I had to adapt (namely the guestfish commands; the filesystem layout in the newer VCSA has changed since that Ansible playbook was created).
I've verified these steps on my AlmaLinux 8.10 baremetal host.
The commands run below come from the following packages:
- curl-7.61.1-34 (optional, only used for monitoring installation progress)
- jq-1.6-9 (optional, only used for monitoring installation progress)
- libguestfs-tools-c-1.44.0-9
- libvirt-8.0.0-23.2
- libvirt-client-8.0.0-23.2
- podman-4.9.4-12 (optional, only used for monitoring installation progress)
- qemu-img-6.2.0-52
- virt-install-3.2.0-4
Here's the full run-down of shell (bash) commands I'm using. The converted VMDK files, now qcow2, live in /tmp/vcsa-v7, along with a disk-info file I generated from the VMDKs and a settings.json file that handles the hands-off installation (their contents are also below):
virsh pool-define-as --name vcsa --type dir --target /var/lib/libvirt/images/vcsa
virsh pool-start --pool vcsa --build
virsh pool-autostart --pool vcsa
for V in $(</tmp/vcsa-v7/disk-info); do
  N=${V%%=*};               # first field: disk name
  S=${V##*=};               # last field: size in megabytes
  P=$(echo $V|cut -d= -f2); # middle field: a label, used only for the progress message
  echo -n "creating $P disk, size $S megs.. ";
  virsh vol-create-as --pool vcsa --format qcow2 --capacity ${S}M vcsa-$N.qcow2 ;
done
See these steps for how to take the VCSA ISO, pull out the embedded OVF, and extract its bundled VMDK files.
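For reference, the rough shape of that extraction is sketched below. Everything here is an assumption on my part rather than from those steps: the ISO filename is hypothetical, the mount point is mine, and I'm assuming the usual VCSA ISO layout where the appliance ships as an OVA (a tar archive) under vcsa/ with streamOptimized VMDKs that qemu-img can read directly.

```shell
# hypothetical ISO filename - substitute your actual download
ISO=VMware-VCSA-all-7.0.3-XXXXXXXX.iso
if [ -f "$ISO" ]; then
  mkdir -p /mnt/vcsa-iso /tmp/vcsa-v7
  mount -o loop,ro "$ISO" /mnt/vcsa-iso
  # the appliance OVA is a tar archive; unpack the OVF/VMDKs from it
  tar -xf /mnt/vcsa-iso/vcsa/VMware-vCenter-Server-Appliance-*.ova -C /tmp/vcsa-v7
  # convert each streamOptimized VMDK to qcow2
  for f in /tmp/vcsa-v7/*.vmdk; do
    out="${f%.vmdk}.qcow2"   # e.g. ..._OVF10-disk1.vmdk -> ..._OVF10-disk1.qcow2
    qemu-img convert -p -O qcow2 "$f" "$out"
  done
  umount /mnt/vcsa-iso
fi
```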
for F in /tmp/vcsa-v7/*qcow2; do
  echo -n "importing ${F##*/}.. ";
  # the 8th '-'-separated field of the filename is the diskN.qcow2 part;
  # adjust the field number if your filenames differ
  virsh vol-upload --pool vcsa --vol vcsa-$(echo $F|cut -d- -f8) --file $F --sparse ;
  echo;
done
# inject settings.json into the first disk's root LV so the appliance
# performs an unattended install on first boot
guestfish add \
$(virsh vol-list --pool vcsa --details | awk '$3 ~ /file/ {print $2; exit}') \
: run \
: mount /dev/vg_root_0/lv_root_0 / \
: mkdir /var/install \
: copy-in /tmp/vcsa-v7/settings.json /var/install
virt-install \
-n vcsa \
--ram 20480 \
--vcpus 4 \
--cpu host-passthrough \
--import $(
  for i in $(seq 1 $(wc -l </tmp/vcsa-v7/disk-info)); do
    echo -n "--disk vol=vcsa/vcsa-disk$i.qcow2,bus=sata ";
  done) \
--os-variant linux2022 \
--network bridge:br0,model=e1000e \
--graphics vnc,password=vncpassword \
--noautoconsole
VCSA_FQDN=vcsa.domain.com
VCSA_ROOT_PASSWORD='MWare123!'
ready=0;
echo -n 'Waiting for service port to come online' && \
sleep 15 && \
while [[ $ready -ne 1 ]]; do
  nc -vz $VCSA_FQDN 5480 2>/dev/null;
  if [[ $? -ne 0 ]]; then
    echo -n '.';
    sleep 5;
  else
    ready=1
  fi;
done; echo
ready=0; # now let's watch the progress of the installation
MSG=$(curl -s -k -X GET -H "Authorization: Basic $(echo -n "root:$VCSA_ROOT_PASSWORD" | base64)" https://${VCSA_FQDN}:5480/rest/vcenter/deployment | jq -r '.progress.message.default_message' 2>/dev/null) && \
echo -n "Current state: $MSG" && \
while [[ $ready -ne 1 ]]; do
  NEW_MSG=$(curl -s -k -X GET -H "Authorization: Basic $(echo -n "root:$VCSA_ROOT_PASSWORD" | base64)" https://${VCSA_FQDN}:5480/rest/vcenter/deployment | jq -r '.progress.message.default_message' 2>/dev/null);
  if [[ $NEW_MSG =~ "Task has completed successfully" ]]; then
    echo;
    echo "$NEW_MSG";
    ready=1;
  else
    if [[ "$NEW_MSG" != "" && "$MSG" != "$NEW_MSG" ]]; then
      echo;
      echo -n "$NEW_MSG";
      MSG="$NEW_MSG";
    fi;
    echo -n ".";
  fi;
  sleep 1;
done
If you don't care about that level of detail, you can just periodically poll until the API finally comes up once everything is done, using govc. BTW, here I run govc from VMware's official container image:
VCSA_SSO_PASSWORD='MWare123!'
alias govc='podman run --rm -e GOVC_INSECURE=1 -e GOVC_USERNAME=administrator@vsphere.local -e GOVC_PASSWORD="$VCSA_SSO_PASSWORD" -e GOVC_URL=https://$VCSA_FQDN/sdk docker.io/vmware/govc:v0.42.0 /govc'
ready=0;
echo -n "Waiting for API to come online once installation is complete..." && \
while [[ $ready -ne 1 ]]; do
  OUTPUT="$(govc about 2>/dev/null)"
  if [[ $? -ne 0 ]]; then
    echo -n ".";
    sleep 30;
  else
    echo;
    echo "$OUTPUT"
    ready=1;
  fi;
done # exits only once reaching the SDK stops giving errors, i.e. the VCSA is up and done installing
Finally, once everything is working, you can optionally save some space by getting rid of the 2nd disk, which is just the CD installation media and needlessly stays attached to the VM even after installation. I'm guessing it isn't needed once the VCSA is up and running correctly:
virsh shutdown vcsa
echo -n 'Waiting for VM to come down' && \
VM_STATE=$(virsh dominfo vcsa | awk '/^State:/ {$1="";print}') && \
while [[ ! $VM_STATE =~ "shut off" ]]; do
  echo -n '.';
  sleep 2;
  VM_STATE=$(virsh dominfo vcsa | awk '/^State:/ {$1="";print}')
done; echo
# drop the cdrom entry from /etc/fstab so the appliance doesn't expect the
# installer disk on its next boot
guestfish add \
$(virsh vol-list --pool vcsa --details | awk '$3 ~ /file/ {print $2; exit}') \
: run \
: mount /dev/vg_root_0/lv_root_0 / \
: command 'sed -i -n '\''/cdrom iso9660/!p'\'' /etc/fstab'
# --config alone is enough here, since the VM is already shut off
virsh detach-disk \
  --domain vcsa \
  --target $(virsh vol-list --pool vcsa --details | awk '/disk2.qcow2/ {print $2}') \
  --config
virsh vol-delete --pool vcsa --vol vcsa-disk2.qcow2
virsh start vcsa
And if you ever want to tear the whole thing down later:
virsh destroy vcsa # stop VM
virsh undefine vcsa # delete VM
virsh vol-list --pool vcsa --details \
| awk '$3 ~ /file/ {print $1}' \
| xargs -n 1 virsh vol-delete --pool vcsa # delete all disks
virsh pool-destroy --pool vcsa
virsh pool-undefine --pool vcsa
Be sure to use/adjust the disk-info and settings.json files below to match your environment, and put them into the /tmp/vcsa-v7 dir referenced above.
(And yes, I'm fully IPv6, no IPv4, so you may need to adjust things if you still need IPv4 connectivity.)
- https://github.com/jeffmcutter/vcsa_on_kvm (the first link I found when I came up with the idea; it lacks unattended-install support, but it led me to the next link)
- https://gist.github.com/infernix/0377af0bc9012e3d5e5e (the basis for most of this workflow)
- https://blog.devops.dev/setting-a-vmware-vsphere7-home-lab-part5-automatic-installation-of-vcenter-by-ansible-9d3db954c903 (for monitoring installation progress)
- https://vskeeball.com/2024/01/27/installing-vcenter-as-an-azure-vm/ (may be useful in the future if I want to do upgrades as well)
Additionally, if you want to watch an upgrade in progress:
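Here's a sketch of how I'd poll that, reusing the Basic-auth pattern from the deployment monitoring above. The /rest/appliance/update endpoint is part of the VAMI appliance-management API, but the field names ('.value.state', INSTALL_IN_PROGRESS) are assumptions on my part - verify them against your VCSA version:

```shell
# hedged sketch: poll the appliance update state on the VAMI port (5480)
AUTH="Authorization: Basic $(echo -n "root:$VCSA_ROOT_PASSWORD" | base64)"
while true; do
  STATE=$(curl -s -k -H "$AUTH" "https://${VCSA_FQDN}:5480/rest/appliance/update" \
    | jq -r '.value.state' 2>/dev/null || true)
  echo "update state: ${STATE:-unreachable}"
  # anything other than INSTALL_IN_PROGRESS (including an unreachable API)
  # means there's nothing left to watch
  [[ $STATE != "INSTALL_IN_PROGRESS" ]] && break
  sleep 10
done
```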
Or a backup in progress:
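A similar sketch works for backups via the appliance recovery API. Again, the /rest/appliance/recovery/backup/job endpoints exist in the VAMI API, but the response shape ('.value[0]' as the newest job id, '.value.state', INPROGRESS) is my assumption - check it against your version:

```shell
# hedged sketch: watch a backup job via the VAMI API on port 5480
AUTH="Authorization: Basic $(echo -n "root:$VCSA_ROOT_PASSWORD" | base64)"
BASE="https://${VCSA_FQDN}:5480/rest/appliance/recovery/backup/job"
# grab a job id from the list; assuming the first entry is the most recent
JOB=$(curl -s -k -H "$AUTH" "$BASE" | jq -r '.value[0]' 2>/dev/null || true)
while [[ -n $JOB ]]; do
  STATE=$(curl -s -k -H "$AUTH" "$BASE/$JOB" | jq -r '.value.state' 2>/dev/null || true)
  echo "backup job $JOB: ${STATE:-unknown}"
  # any terminal state (or an unreachable API) ends the watch
  [[ $STATE != "INPROGRESS" ]] && break
  sleep 10
done
```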