Last update: Tue Jan 14 23:15:49 UTC 2020 by @luckylittle
- Understand, identify, and work with containerization features
- Deploy a preconfigured application and identify crucial features such as namespaces, SELinux labels, and cgroups
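The objective above can be exercised hands-on; a rough sketch, assuming podman on a RHEL-like host (the image and container name are placeholders, not from the original):

```shell
# Deploy a preconfigured application (placeholder image)
podman run -d --name web registry.access.redhat.com/ubi8/httpd-24

# PID of the container's main process
PID=$(podman inspect web --format '{{.State.Pid}}')

lsns -p "$PID"          # namespaces the container lives in
ps -eZ | grep httpd     # SELinux label of the process (e.g. container_t)
cat /proc/"$PID"/cgroup # cgroup membership
```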
## /etc/squid/Approved_Sites.txt
# put your vCenter FQDN/address in here too, if OpenShift is creating its own VMs
# the OpenShift Machine API Operator will use the proxy when creating Worker nodes/VMs
vcenter.example.com
# required for OpenShift installation and samples catalog
# https://docs.openshift.com/container-platform/4.11/installing/install_config/configuring-firewall.html
# https://access.redhat.com/articles/3638561
.quay.io # allows cdn.quay.io
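The allow-list above only takes effect once squid.conf references it; a minimal sketch (the acl name `approved_sites` is an assumption, not from the original):

```
# /etc/squid/squid.conf (fragment)
acl approved_sites dstdomain "/etc/squid/Approved_Sites.txt"
http_access allow approved_sites
http_access deny all
```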
# Create an extra/new master node (master-2) based on (master-0):
oc -n openshift-machine-api get machine sbx42-69jrk-master-0 -o json --export | sed -e s/master-0/master-2/g | jq 'del(.metadata.annotations)' | oc -n openshift-machine-api create -f -
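To verify that the cloned machine is picked up by the Machine API (standard `oc` commands, not part of the original notes):

```shell
# the new master-2 machine should appear alongside the existing masters
oc -n openshift-machine-api get machines

# inspect its status/provisioning phase
oc -n openshift-machine-api get machine sbx42-69jrk-master-2 -o yaml
```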
# Retrieve info about the cluster
oc get infrastructure cluster -o yaml
# Delete completed (deployment) pods
oc delete po --field-selector=status.phase==Succeeded
# Info about deploymentconfigs with cpu/memory requests & limits
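The comment above has no command attached; one possible sketch using `oc` custom-columns (the column paths assume a single container per deploymentconfig):

```shell
oc get dc --all-namespaces -o custom-columns='NAMESPACE:.metadata.namespace,NAME:.metadata.name,CPU_REQ:.spec.template.spec.containers[0].resources.requests.cpu,CPU_LIM:.spec.template.spec.containers[0].resources.limits.cpu,MEM_REQ:.spec.template.spec.containers[0].resources.requests.memory,MEM_LIM:.spec.template.spec.containers[0].resources.limits.memory'
```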
# Install OpenShift 3.11 cli
brew install https://raw.githubusercontent.com/cblecker/homebrew-core/d1092419e5113b296a6b1d7ecd2bf6673d39f0a2/Formula/openshift-cli.rb
# ID = /var/lib/origin/openshift.local.volumes/plugins/kubernetes.io/cinder/mounts/[ID]
oc get pv -o json | jq -r '[.items[] | {name:.spec.claimRef.name, namespace:.spec.claimRef.namespace, volumeID:.spec.cinder.volumeID}]' | grep -n2 [ID]
# ElasticSearch: get indices
oc -n openshift-logging rsh $(oc -n openshift-logging get po -l component=es -o name) es_util --query=_cat/indices -XCURL
#!/bin/bash
#
# This extremely rough nonsense is an attempt to automate the disaster recovery
# expired certs documentation published at
# https://docs.openshift.com/container-platform/4.1/disaster_recovery/scenario-3-expired-certs.html
# ... Which was last reviewed on 2019/06/10
#
# Please contact [email protected] with suggestions or corrections
# CUSTOMIZE THESE:
<cm:property-placeholder id="myblueprint.placeholder" persistent-id="camel.blueprint">
  <cm:default-properties>
    <cm:property name="sleutel" value="env:waarden:geen_waarde" />
  </cm:default-properties>
</cm:property-placeholder>
(OpenShift 4.3)
To configure a webhook_config for the Watchdog alert of Alertmanager, we have to adjust the secret that holds alertmanager.yaml:
# oc -n openshift-monitoring get secret alertmanager-main --template='{{ index .data "alertmanager.yaml" }}' | base64 -d > alertmanager.yaml
Then edit alertmanager.yaml as follows:
| global: | |
| resolve_timeout: 5m | |
| route: |
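A minimal sketch of how the edited file might continue once a webhook receiver is added; the receiver names and URL below are placeholders, not from the original:

```yaml
global:
  resolve_timeout: 5m
route:
  group_by: ['alertname']
  receiver: default
  routes:
  - match:
      alertname: Watchdog
    receiver: watchdog-webhook
receivers:
- name: default
- name: watchdog-webhook
  webhook_configs:
  - url: 'https://example.com/watchdog'  # placeholder endpoint
```

To push the edited file back, the documented pattern is roughly: `oc -n openshift-monitoring create secret generic alertmanager-main --from-file=alertmanager.yaml --dry-run -o yaml | oc -n openshift-monitoring replace -f -`.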
Warning: the RKE install method is only supported up to v2.0.8!
This gist describes how to set up Rancher 2 HA using self-signed certificates (with an intermediate) and a Layer 4 (TCP) load balancer.
This is a short guide explaining how to deploy and manage custom SNI or "named" certificates via openshift-ansible. These custom certificates will be served for the public-facing console and API.