@varac
Created January 15, 2021 11:35
otc tf idempotency issue: terraform plan on this workspace schedules 29 resources for creation, but terraform apply fails because some of them (the DNS record sets and the SSH keypair) already exist in the OTC project from an earlier run.
$ for i in cluster/default modules/*/*; do echo $i; terraform init -upgrade $i; done || /bin/true
cluster/default
Upgrading modules...
- cce in modules/infrastructure/cce
- dns in modules/infrastructure/dns
- ssh in modules/infrastructure/ssh
- vpc in modules/infrastructure/vpc
Initializing the backend...
Initializing provider plugins...
- Finding opentelekomcloud/opentelekomcloud versions matching "1.22.3"...
- Finding hashicorp/template versions matching "2.2.0"...
- Installing opentelekomcloud/opentelekomcloud v1.22.3...
- Installed opentelekomcloud/opentelekomcloud v1.22.3 (self-signed, key ID 3EDA0171114F71DF)
- Installing hashicorp/template v2.2.0...
- Installed hashicorp/template v2.2.0 (signed by HashiCorp)
Partner and community providers are signed by their developers.
If you'd like to know more about provider signing, you can read about it here:
https://www.terraform.io/docs/plugins/signing.html
Terraform has created a lock file .terraform.lock.hcl to record the provider
selections it made above. Include this file in your version control repository
so that Terraform can guarantee to make the same selections by default when
you run "terraform init" in the future.
Terraform has been successfully initialized!
You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.
If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
modules/infrastructure/cce
Initializing the backend...
Initializing provider plugins...
- Finding opentelekomcloud/opentelekomcloud versions matching "1.22.3"...
- Using opentelekomcloud/opentelekomcloud v1.22.3 from the shared cache directory
Terraform has been successfully initialized!
You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.
If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
modules/infrastructure/dns
Initializing the backend...
Initializing provider plugins...
- Finding opentelekomcloud/opentelekomcloud versions matching "1.22.3"...
- Using opentelekomcloud/opentelekomcloud v1.22.3 from the shared cache directory
Terraform has been successfully initialized!
You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.
If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
modules/infrastructure/project
Initializing the backend...
Initializing provider plugins...
- Finding opentelekomcloud/opentelekomcloud versions matching "1.22.3"...
- Using opentelekomcloud/opentelekomcloud v1.22.3 from the shared cache directory
Terraform has been successfully initialized!
You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.
If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
modules/infrastructure/ssh
Initializing the backend...
Initializing provider plugins...
- Finding opentelekomcloud/opentelekomcloud versions matching "1.22.3"...
- Using opentelekomcloud/opentelekomcloud v1.22.3 from the shared cache directory
Terraform has been successfully initialized!
You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.
If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
modules/infrastructure/vpc
Initializing the backend...
Initializing provider plugins...
- Finding opentelekomcloud/opentelekomcloud versions matching "1.22.3"...
- Using opentelekomcloud/opentelekomcloud v1.22.3 from the shared cache directory
Terraform has been successfully initialized!
You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.
If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
modules/kubernetes/consul
Initializing the backend...
Initializing provider plugins...
- Finding hashicorp/helm versions matching "1.3.2"...
- Installing hashicorp/helm v1.3.2...
- Installed hashicorp/helm v1.3.2 (signed by HashiCorp)
Terraform has made some changes to the provider dependency selections recorded
in the .terraform.lock.hcl file. Review those changes and commit them to your
version control system if they represent changes you intended to make.
Terraform has been successfully initialized!
You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.
If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
modules/kubernetes/echo
Initializing the backend...
Initializing provider plugins...
- Finding hashicorp/template versions matching "2.2.0"...
- Finding hashicorp/helm versions matching "1.3.2"...
- Using hashicorp/template v2.2.0 from the shared cache directory
- Using hashicorp/helm v1.3.2 from the shared cache directory
Terraform has been successfully initialized!
You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.
If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
modules/kubernetes/kubeview
Initializing the backend...
Initializing provider plugins...
- Finding hashicorp/helm versions matching "1.3.2"...
- Finding hashicorp/template versions matching "2.2.0"...
- Using hashicorp/helm v1.3.2 from the shared cache directory
- Using hashicorp/template v2.2.0 from the shared cache directory
Terraform has been successfully initialized!
You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.
If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
modules/kubernetes/namespaces
Initializing the backend...
Initializing provider plugins...
- Finding hashicorp/kubernetes versions matching "1.13.3"...
- Installing hashicorp/kubernetes v1.13.3...
- Installed hashicorp/kubernetes v1.13.3 (signed by HashiCorp)
Terraform has made some changes to the provider dependency selections recorded
in the .terraform.lock.hcl file. Review those changes and commit them to your
version control system if they represent changes you intended to make.
Terraform has been successfully initialized!
You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.
If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
modules/kubernetes/storage-class
Initializing the backend...
Initializing provider plugins...
- Finding hashicorp/kubernetes versions matching "1.13.3"...
- Using hashicorp/kubernetes v1.13.3 from the shared cache directory
Terraform has been successfully initialized!
You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.
If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
modules/kubernetes/traefik
Initializing the backend...
Initializing provider plugins...
- Finding hashicorp/template versions matching "2.2.0"...
- Finding opentelekomcloud/opentelekomcloud versions matching "1.22.3"...
- Finding hashicorp/helm versions matching "1.3.2"...
- Finding hashicorp/kubernetes versions matching "1.13.3"...
- Using hashicorp/template v2.2.0 from the shared cache directory
- Using opentelekomcloud/opentelekomcloud v1.22.3 from the shared cache directory
- Using hashicorp/helm v1.3.2 from the shared cache directory
- Using hashicorp/kubernetes v1.13.3 from the shared cache directory
Terraform has been successfully initialized!
You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.
If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
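Note: the trailing "|| /bin/true" in the init loop above only guards the exit status of the whole for loop, so a failing "terraform init" in one directory would be silently skipped over. A minimal sketch of a stricter variant (same directory layout assumed) that aborts on the first failure:

#!/usr/bin/env bash
set -euo pipefail

# Initialize the root module and every module directory; a failing
# "terraform init" stops the loop instead of being masked.
for dir in cluster/default modules/*/*; do
  echo "==> ${dir}"
  terraform init -upgrade "${dir}"
done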
$ cd environments/dev/
$ terraform init
Initializing modules...
- cluster in ../../cluster/default
- cluster.cce in ../../modules/infrastructure/cce
- cluster.dns in ../../modules/infrastructure/dns
- cluster.ssh in ../../modules/infrastructure/ssh
- cluster.vpc in ../../modules/infrastructure/vpc
- consul in ../../modules/kubernetes/consul
- echo in ../../modules/kubernetes/echo
- kubeview in ../../modules/kubernetes/kubeview
- namespaces in ../../modules/kubernetes/namespaces
- storage_class in ../../modules/kubernetes/storage-class
- traefik in ../../modules/kubernetes/traefik
Initializing the backend...
Successfully configured the backend "remote"! Terraform will automatically
use this backend unless the backend configuration changes.
Initializing provider plugins...
- Finding hashicorp/helm versions matching "1.3.2"...
- Finding opentelekomcloud/opentelekomcloud versions matching "1.22.3"...
- Finding hashicorp/template versions matching "2.2.0"...
- Finding hashicorp/kubernetes versions matching "1.13.3"...
- Using hashicorp/helm v1.3.2 from the shared cache directory
- Using opentelekomcloud/opentelekomcloud v1.22.3 from the shared cache directory
- Using hashicorp/template v2.2.0 from the shared cache directory
- Using hashicorp/kubernetes v1.13.3 from the shared cache directory
Terraform has created a lock file .terraform.lock.hcl to record the provider
selections it made above. Include this file in your version control repository
so that Terraform can guarantee to make the same selections by default when
you run "terraform init" in the future.
Terraform has been successfully initialized!
You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.
If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
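Aside: the "shared cache directory" lines above come from Terraform's provider plugin cache, which keeps a single copy of each provider on disk instead of re-downloading it for every directory. A minimal sketch of enabling it in the CLI configuration (the path is an assumption; any writable directory works):

# ~/.terraformrc
plugin_cache_dir = "$HOME/.terraform.d/plugin-cache"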
$ terraform plan --out=tf-plan.zip
module.cluster.module.vpc.opentelekomcloud_networking_floatingip_v2.traefik: Refreshing state... [id=c4a49b62-45e1-44bd-b157-178b2638c329]
module.cluster.module.vpc.opentelekomcloud_vpc_eip_v1.cluster_eip_address: Refreshing state... [id=a2c92026-cd55-4eaa-a0ad-49d4f887f1f5]
module.cluster.module.vpc.opentelekomcloud_networking_floatingip_v2.nat["eu-de-03"]: Refreshing state... [id=5af4715a-aa30-4ebc-92ca-7749ed63348b]
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
+ create
<= read (data resources)
Terraform will perform the following actions:
# module.cluster.data.template_file.kubeconfig will be read during apply
# (config refers to values not yet known)
<= data "template_file" "kubeconfig" {
+ id = (known after apply)
+ rendered = (known after apply)
+ template = <<-EOT
apiVersion: v1
kind: Config
current-context: default
contexts:
- name: default
context:
cluster: default
user: ${user_name}
clusters:
- name: default
cluster:
server: https://${cluster_addr}:5443
certificate-authority-data: ${cluster_ca}
users:
- name: ${user_name}
user:
client-certificate-data: ${client_cert}
client-key-data: ${client_cert_key}
EOT
+ vars = {
+ "client_cert" = (known after apply)
+ "client_cert_key" = (known after apply)
+ "cluster_addr" = "80.158.6.43"
+ "cluster_ca" = (known after apply)
+ "user_name" = (known after apply)
}
}
# module.consul.helm_release.consul will be created
+ resource "helm_release" "consul" {
+ atomic = false
+ chart = "consul"
+ cleanup_on_fail = false
+ create_namespace = false
+ dependency_update = false
+ disable_crd_hooks = false
+ disable_openapi_validation = false
+ disable_webhooks = false
+ force_update = false
+ id = (known after apply)
+ lint = false
+ max_history = 0
+ metadata = (known after apply)
+ name = "consul"
+ namespace = "consul"
+ recreate_pods = false
+ render_subchart_notes = true
+ replace = false
+ repository = "https://helm.releases.hashicorp.com"
+ reset_values = false
+ reuse_values = false
+ skip_crds = false
+ status = "deployed"
+ timeout = 300
+ verify = false
+ version = "0.26.0"
+ wait = true
}
# module.echo.helm_release.echo will be created
+ resource "helm_release" "echo" {
+ atomic = false
+ chart = "echo-server"
+ cleanup_on_fail = false
+ create_namespace = false
+ dependency_update = false
+ disable_crd_hooks = false
+ disable_openapi_validation = false
+ disable_webhooks = false
+ force_update = false
+ id = (known after apply)
+ lint = false
+ max_history = 0
+ metadata = (known after apply)
+ name = "echo"
+ namespace = "echo"
+ recreate_pods = false
+ render_subchart_notes = true
+ replace = false
+ repository = "https://ealenn.github.io/charts"
+ reset_values = false
+ reuse_values = false
+ skip_crds = false
+ status = "deployed"
+ timeout = 300
+ values = [
+ <<-EOT
ingress:
enabled: true
hosts:
- host: echo.dev.coyotest.com
paths:
- /
annotations: {
kubernetes.io/ingress.class: "traefik",
traefik.ingress.kubernetes.io/router.tls: "true",
traefik.ingress.kubernetes.io/router.tls.certresolver: "letsencrypt"
}
application:
enable:
environment: false
EOT,
]
+ verify = false
+ version = "0.3.0"
+ wait = true
}
# module.kubeview.helm_release.kubeview will be created
+ resource "helm_release" "kubeview" {
+ atomic = false
+ chart = "kubeview"
+ cleanup_on_fail = false
+ create_namespace = false
+ dependency_update = false
+ disable_crd_hooks = false
+ disable_openapi_validation = false
+ disable_webhooks = false
+ force_update = false
+ id = (known after apply)
+ lint = false
+ max_history = 0
+ metadata = (known after apply)
+ name = "kubeview"
+ namespace = "kubeview"
+ recreate_pods = false
+ render_subchart_notes = true
+ replace = false
+ repository = "https://benc-uk.github.io/kubeview/charts"
+ reset_values = false
+ reuse_values = false
+ skip_crds = false
+ status = "deployed"
+ timeout = 300
+ values = [
+ <<-EOT
ingress:
enabled: true
EOT,
]
+ verify = false
+ version = "0.1.17"
+ wait = true
}
# module.namespaces.kubernetes_namespace.namespace["consul"] will be created
+ resource "kubernetes_namespace" "namespace" {
+ id = (known after apply)
+ metadata {
+ generation = (known after apply)
+ name = "consul"
+ resource_version = (known after apply)
+ self_link = (known after apply)
+ uid = (known after apply)
}
}
# module.namespaces.kubernetes_namespace.namespace["echo"] will be created
+ resource "kubernetes_namespace" "namespace" {
+ id = (known after apply)
+ metadata {
+ generation = (known after apply)
+ name = "echo"
+ resource_version = (known after apply)
+ self_link = (known after apply)
+ uid = (known after apply)
}
}
# module.namespaces.kubernetes_namespace.namespace["kubeview"] will be created
+ resource "kubernetes_namespace" "namespace" {
+ id = (known after apply)
+ metadata {
+ generation = (known after apply)
+ name = "kubeview"
+ resource_version = (known after apply)
+ self_link = (known after apply)
+ uid = (known after apply)
}
}
# module.namespaces.kubernetes_namespace.namespace["traefik"] will be created
+ resource "kubernetes_namespace" "namespace" {
+ id = (known after apply)
+ metadata {
+ generation = (known after apply)
+ name = "traefik"
+ resource_version = (known after apply)
+ self_link = (known after apply)
+ uid = (known after apply)
}
}
# module.storage_class.kubernetes_storage_class.default will be created
+ resource "kubernetes_storage_class" "default" {
+ allow_volume_expansion = true
+ id = (known after apply)
+ parameters = {
+ "csi.storage.k8s.io/csi-driver-name" = "disk.csi.everest.io"
+ "csi.storage.k8s.io/fstype" = "ext4"
+ "everest.io/disk-volume-type" = "SATA"
+ "everest.io/passthrough" = "true"
}
+ reclaim_policy = "Delete"
+ storage_provisioner = "everest-csi-provisioner"
+ volume_binding_mode = "Immediate"
+ metadata {
+ annotations = {
+ "storageclass.kubernetes.io/is-default-class" = "true"
}
+ generation = (known after apply)
+ name = "csi-disk-default"
+ resource_version = (known after apply)
+ self_link = (known after apply)
+ uid = (known after apply)
}
}
# module.traefik.helm_release.traefik will be created
+ resource "helm_release" "traefik" {
+ atomic = false
+ chart = "traefik"
+ cleanup_on_fail = false
+ create_namespace = false
+ dependency_update = false
+ disable_crd_hooks = false
+ disable_openapi_validation = false
+ disable_webhooks = false
+ force_update = false
+ id = (known after apply)
+ lint = false
+ max_history = 0
+ metadata = (known after apply)
+ name = "traefik"
+ namespace = "traefik"
+ recreate_pods = false
+ render_subchart_notes = true
+ replace = false
+ repository = "https://helm.traefik.io/traefik"
+ reset_values = false
+ reuse_values = false
+ skip_crds = false
+ status = "deployed"
+ timeout = 300
+ values = (known after apply)
+ verify = false
+ version = "9.11.0"
+ wait = true
}
# module.cluster.module.cce.opentelekomcloud_cce_cluster_v3.cluster will be created
+ resource "opentelekomcloud_cce_cluster_v3" "cluster" {
+ authentication_mode = "rbac"
+ billing_mode = (known after apply)
+ certificate_clusters = (known after apply)
+ certificate_users = (known after apply)
+ cluster_type = "VirtualMachine"
+ cluster_version = "v1.17.9-r0"
+ container_network_cidr = (known after apply)
+ container_network_type = "overlay_l2"
+ description = (known after apply)
+ eip = "80.158.6.43"
+ external = (known after apply)
+ external_otc = (known after apply)
+ flavor_id = "cce.s2.small"
+ highway_subnet_id = (known after apply)
+ id = (known after apply)
+ internal = (known after apply)
+ kube_proxy_mode = (known after apply)
+ kubernetes_svc_ip_range = (known after apply)
+ multi_az = true
+ name = "dev-cluster"
+ region = (known after apply)
+ status = (known after apply)
+ subnet_id = (known after apply)
+ vpc_id = (known after apply)
}
# module.cluster.module.cce.opentelekomcloud_cce_node_v3.az_1[0] will be created
+ resource "opentelekomcloud_cce_node_v3" "az_1" {
+ availability_zone = "eu-de-01"
+ bandwidth_charge_mode = (known after apply)
+ billing_mode = (known after apply)
+ cluster_id = (known after apply)
+ ecs_performance_type = (known after apply)
+ eip_count = 0
+ extend_param_charging_mode = (known after apply)
+ flavor_id = "s2.large.2"
+ id = (known after apply)
+ iptype = (known after apply)
+ key_pair = "dev-ssh-keypair"
+ max_pods = (known after apply)
+ name = "az-1-node-0"
+ order_id = (known after apply)
+ os = (known after apply)
+ private_ip = (known after apply)
+ product_id = (known after apply)
+ public_ip = (known after apply)
+ public_key = (known after apply)
+ region = (known after apply)
+ server_id = (known after apply)
+ sharetype = (known after apply)
+ status = (known after apply)
+ data_volumes {
+ size = 100
+ volumetype = "SATA"
}
+ root_volume {
+ size = 40
+ volumetype = "SATA"
}
+ timeouts {
+ create = "30m"
+ delete = "30m"
}
}
# module.cluster.module.cce.opentelekomcloud_cce_node_v3.az_1[1] will be created
+ resource "opentelekomcloud_cce_node_v3" "az_1" {
+ availability_zone = "eu-de-01"
+ bandwidth_charge_mode = (known after apply)
+ billing_mode = (known after apply)
+ cluster_id = (known after apply)
+ ecs_performance_type = (known after apply)
+ eip_count = 0
+ extend_param_charging_mode = (known after apply)
+ flavor_id = "s2.large.2"
+ id = (known after apply)
+ iptype = (known after apply)
+ key_pair = "dev-ssh-keypair"
+ max_pods = (known after apply)
+ name = "az-1-node-1"
+ order_id = (known after apply)
+ os = (known after apply)
+ private_ip = (known after apply)
+ product_id = (known after apply)
+ public_ip = (known after apply)
+ public_key = (known after apply)
+ region = (known after apply)
+ server_id = (known after apply)
+ sharetype = (known after apply)
+ status = (known after apply)
+ data_volumes {
+ size = 100
+ volumetype = "SATA"
}
+ root_volume {
+ size = 40
+ volumetype = "SATA"
}
+ timeouts {
+ create = "30m"
+ delete = "30m"
}
}
# module.cluster.module.cce.opentelekomcloud_cce_node_v3.az_1[2] will be created
+ resource "opentelekomcloud_cce_node_v3" "az_1" {
+ availability_zone = "eu-de-01"
+ bandwidth_charge_mode = (known after apply)
+ billing_mode = (known after apply)
+ cluster_id = (known after apply)
+ ecs_performance_type = (known after apply)
+ eip_count = 0
+ extend_param_charging_mode = (known after apply)
+ flavor_id = "s2.large.2"
+ id = (known after apply)
+ iptype = (known after apply)
+ key_pair = "dev-ssh-keypair"
+ max_pods = (known after apply)
+ name = "az-1-node-2"
+ order_id = (known after apply)
+ os = (known after apply)
+ private_ip = (known after apply)
+ product_id = (known after apply)
+ public_ip = (known after apply)
+ public_key = (known after apply)
+ region = (known after apply)
+ server_id = (known after apply)
+ sharetype = (known after apply)
+ status = (known after apply)
+ data_volumes {
+ size = 100
+ volumetype = "SATA"
}
+ root_volume {
+ size = 40
+ volumetype = "SATA"
}
+ timeouts {
+ create = "30m"
+ delete = "30m"
}
}
# module.cluster.module.cce.opentelekomcloud_cce_node_v3.az_2[0] will be created
+ resource "opentelekomcloud_cce_node_v3" "az_2" {
+ availability_zone = "eu-de-02"
+ bandwidth_charge_mode = (known after apply)
+ billing_mode = (known after apply)
+ cluster_id = (known after apply)
+ ecs_performance_type = (known after apply)
+ eip_count = 0
+ extend_param_charging_mode = (known after apply)
+ flavor_id = "s2.large.2"
+ id = (known after apply)
+ iptype = (known after apply)
+ key_pair = "dev-ssh-keypair"
+ max_pods = (known after apply)
+ name = "az-2-node-0"
+ order_id = (known after apply)
+ os = (known after apply)
+ private_ip = (known after apply)
+ product_id = (known after apply)
+ public_ip = (known after apply)
+ public_key = (known after apply)
+ region = (known after apply)
+ server_id = (known after apply)
+ sharetype = (known after apply)
+ status = (known after apply)
+ data_volumes {
+ size = 100
+ volumetype = "SATA"
}
+ root_volume {
+ size = 40
+ volumetype = "SATA"
}
+ timeouts {
+ create = "30m"
+ delete = "30m"
}
}
# module.cluster.module.cce.opentelekomcloud_cce_node_v3.az_2[1] will be created
+ resource "opentelekomcloud_cce_node_v3" "az_2" {
+ availability_zone = "eu-de-02"
+ bandwidth_charge_mode = (known after apply)
+ billing_mode = (known after apply)
+ cluster_id = (known after apply)
+ ecs_performance_type = (known after apply)
+ eip_count = 0
+ extend_param_charging_mode = (known after apply)
+ flavor_id = "s2.large.2"
+ id = (known after apply)
+ iptype = (known after apply)
+ key_pair = "dev-ssh-keypair"
+ max_pods = (known after apply)
+ name = "az-2-node-1"
+ order_id = (known after apply)
+ os = (known after apply)
+ private_ip = (known after apply)
+ product_id = (known after apply)
+ public_ip = (known after apply)
+ public_key = (known after apply)
+ region = (known after apply)
+ server_id = (known after apply)
+ sharetype = (known after apply)
+ status = (known after apply)
+ data_volumes {
+ size = 100
+ volumetype = "SATA"
}
+ root_volume {
+ size = 40
+ volumetype = "SATA"
}
+ timeouts {
+ create = "30m"
+ delete = "30m"
}
}
# module.cluster.module.cce.opentelekomcloud_cce_node_v3.az_2[2] will be created
+ resource "opentelekomcloud_cce_node_v3" "az_2" {
+ availability_zone = "eu-de-02"
+ bandwidth_charge_mode = (known after apply)
+ billing_mode = (known after apply)
+ cluster_id = (known after apply)
+ ecs_performance_type = (known after apply)
+ eip_count = 0
+ extend_param_charging_mode = (known after apply)
+ flavor_id = "s2.large.2"
+ id = (known after apply)
+ iptype = (known after apply)
+ key_pair = "dev-ssh-keypair"
+ max_pods = (known after apply)
+ name = "az-2-node-2"
+ order_id = (known after apply)
+ os = (known after apply)
+ private_ip = (known after apply)
+ product_id = (known after apply)
+ public_ip = (known after apply)
+ public_key = (known after apply)
+ region = (known after apply)
+ server_id = (known after apply)
+ sharetype = (known after apply)
+ status = (known after apply)
+ data_volumes {
+ size = 100
+ volumetype = "SATA"
}
+ root_volume {
+ size = 40
+ volumetype = "SATA"
}
+ timeouts {
+ create = "30m"
+ delete = "30m"
}
}
# module.cluster.module.cce.opentelekomcloud_cce_node_v3.az_3[0] will be created
+ resource "opentelekomcloud_cce_node_v3" "az_3" {
+ availability_zone = "eu-de-03"
+ bandwidth_charge_mode = (known after apply)
+ billing_mode = (known after apply)
+ cluster_id = (known after apply)
+ ecs_performance_type = (known after apply)
+ eip_count = 0
+ extend_param_charging_mode = (known after apply)
+ flavor_id = "s2.large.2"
+ id = (known after apply)
+ iptype = (known after apply)
+ key_pair = "dev-ssh-keypair"
+ max_pods = (known after apply)
+ name = "az-3-node-0"
+ order_id = (known after apply)
+ os = (known after apply)
+ private_ip = (known after apply)
+ product_id = (known after apply)
+ public_ip = (known after apply)
+ public_key = (known after apply)
+ region = (known after apply)
+ server_id = (known after apply)
+ sharetype = (known after apply)
+ status = (known after apply)
+ data_volumes {
+ size = 100
+ volumetype = "SATA"
}
+ root_volume {
+ size = 40
+ volumetype = "SATA"
}
+ timeouts {
+ create = "30m"
+ delete = "30m"
}
}
# module.cluster.module.cce.opentelekomcloud_cce_node_v3.az_3[1] will be created
+ resource "opentelekomcloud_cce_node_v3" "az_3" {
+ availability_zone = "eu-de-03"
+ bandwidth_charge_mode = (known after apply)
+ billing_mode = (known after apply)
+ cluster_id = (known after apply)
+ ecs_performance_type = (known after apply)
+ eip_count = 0
+ extend_param_charging_mode = (known after apply)
+ flavor_id = "s2.large.2"
+ id = (known after apply)
+ iptype = (known after apply)
+ key_pair = "dev-ssh-keypair"
+ max_pods = (known after apply)
+ name = "az-3-node-1"
+ order_id = (known after apply)
+ os = (known after apply)
+ private_ip = (known after apply)
+ product_id = (known after apply)
+ public_ip = (known after apply)
+ public_key = (known after apply)
+ region = (known after apply)
+ server_id = (known after apply)
+ sharetype = (known after apply)
+ status = (known after apply)
+ data_volumes {
+ size = 100
+ volumetype = "SATA"
}
+ root_volume {
+ size = 40
+ volumetype = "SATA"
}
+ timeouts {
+ create = "30m"
+ delete = "30m"
}
}
# module.cluster.module.cce.opentelekomcloud_cce_node_v3.az_3[2] will be created
+ resource "opentelekomcloud_cce_node_v3" "az_3" {
+ availability_zone = "eu-de-03"
+ bandwidth_charge_mode = (known after apply)
+ billing_mode = (known after apply)
+ cluster_id = (known after apply)
+ ecs_performance_type = (known after apply)
+ eip_count = 0
+ extend_param_charging_mode = (known after apply)
+ flavor_id = "s2.large.2"
+ id = (known after apply)
+ iptype = (known after apply)
+ key_pair = "dev-ssh-keypair"
+ max_pods = (known after apply)
+ name = "az-3-node-2"
+ order_id = (known after apply)
+ os = (known after apply)
+ private_ip = (known after apply)
+ product_id = (known after apply)
+ public_ip = (known after apply)
+ public_key = (known after apply)
+ region = (known after apply)
+ server_id = (known after apply)
+ sharetype = (known after apply)
+ status = (known after apply)
+ data_volumes {
+ size = 100
+ volumetype = "SATA"
}
+ root_volume {
+ size = 40
+ volumetype = "SATA"
}
+ timeouts {
+ create = "30m"
+ delete = "30m"
}
}
# module.cluster.module.dns.data.opentelekomcloud_dns_zone_v2.tld will be read during apply
# (config refers to values not yet known)
<= data "opentelekomcloud_dns_zone_v2" "tld" {
+ attributes = (known after apply)
+ created_at = (known after apply)
+ id = (known after apply)
+ links = (known after apply)
+ masters = (known after apply)
+ name = "coyotest.com."
+ pool_id = (known after apply)
+ project_id = (known after apply)
+ serial = (known after apply)
+ transferred_at = (known after apply)
+ updated_at = (known after apply)
+ version = (known after apply)
}
# module.cluster.module.dns.opentelekomcloud_dns_recordset_v2.loadbalancer will be created
+ resource "opentelekomcloud_dns_recordset_v2" "loadbalancer" {
+ description = "A record for load balancer"
+ id = (known after apply)
+ name = "dev.coyotest.com."
+ records = [
+ "80.158.7.66",
]
+ region = (known after apply)
+ ttl = 300
+ type = "A"
+ zone_id = (known after apply)
}
# module.cluster.module.dns.opentelekomcloud_dns_recordset_v2.loadbalancer_wildcard will be created
+ resource "opentelekomcloud_dns_recordset_v2" "loadbalancer_wildcard" {
+ description = "Wildcard A record for load balancer"
+ id = (known after apply)
+ name = "*.dev.coyotest.com."
+ records = [
+ "80.158.7.66",
]
+ region = (known after apply)
+ ttl = 300
+ type = "A"
+ zone_id = (known after apply)
}
# module.cluster.module.ssh.opentelekomcloud_compute_keypair_v2.default_ssh_keypair will be created
+ resource "opentelekomcloud_compute_keypair_v2" "default_ssh_keypair" {
+ id = (known after apply)
+ name = "dev-ssh-keypair"
+ public_key = <<-EOT
ssh-ed25519 XXXXX shared_ssh_otc_admin_key
EOT
+ region = (known after apply)
}
# module.cluster.module.vpc.opentelekomcloud_lb_loadbalancer_v2.traefik["eu-de-03"] will be created
+ resource "opentelekomcloud_lb_loadbalancer_v2" "traefik" {
+ admin_state_up = true
+ description = "Traefik"
+ id = (known after apply)
+ loadbalancer_provider = (known after apply)
+ name = "traefik"
+ region = (known after apply)
+ security_group_ids = (known after apply)
+ tenant_id = (known after apply)
+ vip_address = (known after apply)
+ vip_port_id = (known after apply)
+ vip_subnet_id = (known after apply)
}
# module.cluster.module.vpc.opentelekomcloud_nat_gateway_v2.nat_gateway["eu-de-03"] will be created
+ resource "opentelekomcloud_nat_gateway_v2" "nat_gateway" {
+ description = "NAT gateway for eu-de-03"
+ id = (known after apply)
+ internal_network_id = (known after apply)
+ name = "eu-de-03"
+ region = (known after apply)
+ router_id = (known after apply)
+ spec = "1"
+ tenant_id = (known after apply)
}
# module.cluster.module.vpc.opentelekomcloud_nat_snat_rule_v2.private_to_nat["eu-de-03"] will be created
+ resource "opentelekomcloud_nat_snat_rule_v2" "private_to_nat" {
+ floating_ip_id = "5af4715a-aa30-4ebc-92ca-7749ed63348b"
+ id = (known after apply)
+ nat_gateway_id = (known after apply)
+ network_id = (known after apply)
+ region = (known after apply)
}
# module.cluster.module.vpc.opentelekomcloud_networking_floatingip_associate_v2.traefik will be created
+ resource "opentelekomcloud_networking_floatingip_associate_v2" "traefik" {
+ floating_ip = "80.158.7.66"
+ id = (known after apply)
+ port_id = (known after apply)
+ region = (known after apply)
}
# module.cluster.module.vpc.opentelekomcloud_vpc_subnet_v1.private["eu-de-03"] will be created
+ resource "opentelekomcloud_vpc_subnet_v1" "private" {
+ availability_zone = "eu-de-03"
+ cidr = "10.1.1.0/24"
+ dhcp_enable = true
+ dns_list = [
+ "100.125.4.25",
+ "8.8.8.8",
]
+ gateway_ip = "10.1.1.1"
+ id = (known after apply)
+ name = "subnet-private-eu-de-03"
+ primary_dns = (known after apply)
+ region = (known after apply)
+ secondary_dns = (known after apply)
+ subnet_id = (known after apply)
+ tags = {
+ "application" = "tbd"
+ "environment" = "dev"
+ "team" = "nebula"
}
+ vpc_id = (known after apply)
}
# module.cluster.module.vpc.opentelekomcloud_vpc_subnet_v1.public["eu-de-03"] will be created
+ resource "opentelekomcloud_vpc_subnet_v1" "public" {
+ availability_zone = "eu-de-03"
+ cidr = "10.1.2.0/24"
+ dhcp_enable = true
+ dns_list = [
+ "100.125.4.25",
+ "8.8.8.8",
]
+ gateway_ip = "10.1.2.1"
+ id = (known after apply)
+ name = "subnet-public-eu-de-03"
+ primary_dns = (known after apply)
+ region = (known after apply)
+ secondary_dns = (known after apply)
+ subnet_id = (known after apply)
+ tags = {
+ "application" = "tbd"
+ "environment" = "dev"
+ "team" = "nebula"
}
+ vpc_id = (known after apply)
}
# module.cluster.module.vpc.opentelekomcloud_vpc_v1.vpc will be created
+ resource "opentelekomcloud_vpc_v1" "vpc" {
+ cidr = "10.1.0.0/16"
+ id = (known after apply)
+ name = "vpc-dev"
+ region = (known after apply)
+ shared = (known after apply)
+ status = (known after apply)
+ tags = {
+ "application" = "tbd"
+ "environment" = "dev"
+ "team" = "nebula"
}
}
Plan: 29 to add, 0 to change, 0 to destroy.
Changes to Outputs:
+ kubeconfig = (known after apply)
------------------------------------------------------------------------
This plan was saved to: tf-plan.zip
To perform exactly these actions, run the following command to apply:
terraform apply "tf-plan.zip"
$ terraform apply -auto-approve tf-plan.zip
module.cluster.module.vpc.opentelekomcloud_vpc_v1.vpc: Creating...
module.cluster.module.vpc.opentelekomcloud_vpc_v1.vpc: Creation complete after 9s [id=8a8cbccb-0079-42bc-a94e-19144ee403eb]
module.cluster.module.vpc.opentelekomcloud_vpc_subnet_v1.private["eu-de-03"]: Creating...
module.cluster.module.vpc.opentelekomcloud_vpc_subnet_v1.public["eu-de-03"]: Creating...
module.cluster.module.vpc.opentelekomcloud_vpc_subnet_v1.private["eu-de-03"]: Still creating... [10s elapsed]
module.cluster.module.vpc.opentelekomcloud_vpc_subnet_v1.public["eu-de-03"]: Still creating... [10s elapsed]
module.cluster.module.vpc.opentelekomcloud_vpc_subnet_v1.public["eu-de-03"]: Creation complete after 10s [id=5e0785ab-9428-4046-a963-abc1c9bbf566]
module.cluster.module.vpc.opentelekomcloud_nat_gateway_v2.nat_gateway["eu-de-03"]: Creating...
module.cluster.module.vpc.opentelekomcloud_lb_loadbalancer_v2.traefik["eu-de-03"]: Creating...
module.cluster.module.vpc.opentelekomcloud_vpc_subnet_v1.private["eu-de-03"]: Creation complete after 16s [id=f16a4835-7a2d-46b3-a65e-2b8ca7171dda]
module.cluster.module.vpc.opentelekomcloud_lb_loadbalancer_v2.traefik["eu-de-03"]: Creation complete after 9s [id=0897dade-69ee-4e2e-b32f-00f0532573b1]
module.cluster.module.vpc.opentelekomcloud_networking_floatingip_associate_v2.traefik: Creating...
module.cluster.module.vpc.opentelekomcloud_nat_gateway_v2.nat_gateway["eu-de-03"]: Creation complete after 10s [id=64e5f520-eacc-4a32-9c9f-33dc28586a50]
module.cluster.module.vpc.opentelekomcloud_nat_snat_rule_v2.private_to_nat["eu-de-03"]: Creating...
module.cluster.module.vpc.opentelekomcloud_networking_floatingip_associate_v2.traefik: Creation complete after 4s [id=c4a49b62-45e1-44bd-b157-178b2638c329]
module.cluster.module.vpc.opentelekomcloud_nat_snat_rule_v2.private_to_nat["eu-de-03"]: Creation complete after 9s [id=2e84ddfd-c283-4f6a-8970-07b96112fb82]
module.cluster.module.dns.data.opentelekomcloud_dns_zone_v2.tld: Reading...
module.cluster.module.ssh.opentelekomcloud_compute_keypair_v2.default_ssh_keypair: Creating...
module.cluster.module.dns.data.opentelekomcloud_dns_zone_v2.tld: Read complete after 1s [id=ff80808275f5fb9c0176fd67a14834ff]
module.cluster.module.dns.opentelekomcloud_dns_recordset_v2.loadbalancer: Creating...
module.cluster.module.dns.opentelekomcloud_dns_recordset_v2.loadbalancer_wildcard: Creating...
Error: Error creating OpenTelekomCloud DNS record set: Bad request with: [POST https://dns.eu-de.otc.t-systems.com/v2/zones/ff80808275f5fb9c0176fd67a14834ff/recordsets], error message: {"code":"DNS.0312","message":"Attribute 'name' conflicts with Record Set 'dev.coyotest.com.' type 'A' in line 'default_view'."}

  on ../../modules/infrastructure/dns/main.tf line 5, in resource "opentelekomcloud_dns_recordset_v2" "loadbalancer":
   5: resource "opentelekomcloud_dns_recordset_v2" "loadbalancer" {

Error: Error creating OpenTelekomCloud DNS record set: Bad request with: [POST https://dns.eu-de.otc.t-systems.com/v2/zones/ff80808275f5fb9c0176fd67a14834ff/recordsets], error message: {"code":"DNS.0312","message":"Attribute 'name' conflicts with Record Set '*.dev.coyotest.com.' type 'A' in line 'default_view'."}

  on ../../modules/infrastructure/dns/main.tf line 14, in resource "opentelekomcloud_dns_recordset_v2" "loadbalancer_wildcard":
  14: resource "opentelekomcloud_dns_recordset_v2" "loadbalancer_wildcard" {

Error: Error creating OpenTelekomCloud keypair: Expected HTTP response code [200] when accessing [POST https://ecs.eu-de.otc.t-systems.com/v2.1/1529f1a1de5d4fe98ac879a8f07e6154/os-keypairs], but got 409 instead
{"conflictingRequest": {"message": "Key pair 'dev-ssh-keypair' already exists.", "code": 409}}

  on ../../modules/infrastructure/ssh/main.tf line 1, in resource "opentelekomcloud_compute_keypair_v2" "default_ssh_keypair":
   1: resource "opentelekomcloud_compute_keypair_v2" "default_ssh_keypair" {
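All three failures point at the same idempotency problem: the DNS record sets and the keypair exist in the cloud but are missing from the current state, so plan schedules them for creation and apply collides with the existing objects. One possible recovery (a sketch, not a verified fix) is to adopt the pre-existing resources with terraform import before re-running plan/apply. Assumptions: the keypair imports by its name, and the record set import ID is the zone ID and recordset ID joined by a slash (check the provider docs for the exact format); the <RECORDSET_ID> placeholders are hypothetical and have to be looked up in the DNS API or console.

# Adopt the keypair that already exists in the project (imported by name).
$ terraform import 'module.cluster.module.ssh.opentelekomcloud_compute_keypair_v2.default_ssh_keypair' dev-ssh-keypair

# Adopt the two conflicting record sets; <RECORDSET_ID> is a placeholder.
$ terraform import 'module.cluster.module.dns.opentelekomcloud_dns_recordset_v2.loadbalancer' ff80808275f5fb9c0176fd67a14834ff/<RECORDSET_ID>
$ terraform import 'module.cluster.module.dns.opentelekomcloud_dns_recordset_v2.loadbalancer_wildcard' ff80808275f5fb9c0176fd67a14834ff/<RECORDSET_ID>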