Kubernetes control-plane components restarting: kube-apiserver, kube-scheduler and kube-controller-manager logs (v1.20.1, node 192.168.1.91)
kube-apiserver log:
Flag --insecure-port has been deprecated, This flag has no effect now and will be removed in v1.24. | |
I1228 06:10:45.350181 1 server.go:632] external host was not specified, using 192.168.1.91 | |
I1228 06:10:45.351642 1 server.go:182] Version: v1.20.1 | |
I1228 06:10:51.150476 1 shared_informer.go:240] Waiting for caches to sync for node_authorizer | |
I1228 06:10:51.156000 1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook. | |
I1228 06:10:51.156093 1 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota. | |
I1228 06:10:51.161833 1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook. | |
I1228 06:10:51.161926 1 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota. | |
I1228 06:10:51.178069 1 client.go:360] parsed scheme: "endpoint" | |
I1228 06:10:51.178397 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 <nil> 0 <nil>}] | |
I1228 06:10:51.341273 1 client.go:360] parsed scheme: "endpoint" | |
I1228 06:10:51.341593 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 <nil> 0 <nil>}] | |
I1228 06:10:51.423718 1 client.go:360] parsed scheme: "passthrough" | |
I1228 06:10:51.424247 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>} | |
I1228 06:10:51.424369 1 clientconn.go:948] ClientConn switching balancer to "pick_first" | |
I1228 06:10:51.427486 1 client.go:360] parsed scheme: "endpoint" | |
I1228 06:10:51.427615 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 <nil> 0 <nil>}] | |
I1228 06:10:51.671655 1 instance.go:289] Using reconciler: lease | |
I1228 06:10:51.673452 1 client.go:360] parsed scheme: "endpoint" | |
I1228 06:10:51.673533 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 <nil> 0 <nil>}] | |
I1228 06:10:51.760098 1 client.go:360] parsed scheme: "endpoint" | |
I1228 06:10:51.760191 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 <nil> 0 <nil>}] | |
I1228 06:10:51.870890 1 client.go:360] parsed scheme: "endpoint" | |
I1228 06:10:53.531932 1 rest.go:131] the default service ipfamily for this cluster is: IPv4 | |
I1228 06:10:53.849365 1 client.go:360] parsed scheme: "endpoint" | |
(...) | |
I1228 06:10:59.416630 1 client.go:360] parsed scheme: "endpoint" | |
I1228 06:10:59.416771 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 <nil> 0 <nil>}] | |
W1228 06:11:00.067882 1 genericapiserver.go:419] Skipping API batch/v2alpha1 because it has no resources. | |
W1228 06:11:00.107988 1 genericapiserver.go:419] Skipping API discovery.k8s.io/v1alpha1 because it has no resources. | |
W1228 06:11:00.152290 1 genericapiserver.go:419] Skipping API node.k8s.io/v1alpha1 because it has no resources. | |
W1228 06:11:00.186949 1 genericapiserver.go:419] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources. | |
W1228 06:11:00.201543 1 genericapiserver.go:419] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources. | |
W1228 06:11:00.226327 1 genericapiserver.go:419] Skipping API storage.k8s.io/v1alpha1 because it has no resources. | |
W1228 06:11:00.236999 1 genericapiserver.go:419] Skipping API flowcontrol.apiserver.k8s.io/v1alpha1 because it has no resources. | |
W1228 06:11:00.258556 1 genericapiserver.go:419] Skipping API apps/v1beta2 because it has no resources. | |
W1228 06:11:00.258628 1 genericapiserver.go:419] Skipping API apps/v1beta1 because it has no resources. | |
I1228 06:11:00.297860 1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook. | |
I1228 06:11:00.297927 1 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota. | |
I1228 06:11:00.307473 1 client.go:360] parsed scheme: "endpoint" | |
I1228 06:11:00.307560 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 <nil> 0 <nil>}] | |
I1228 06:11:00.471041 1 client.go:360] parsed scheme: "endpoint" | |
I1228 06:11:00.471375 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 <nil> 0 <nil>}] | |
I1228 06:11:12.392612 1 dynamic_cafile_content.go:167] Starting request-header::/etc/kubernetes/pki/front-proxy-ca.crt | |
I1228 06:11:12.392711 1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/etc/kubernetes/pki/ca.crt | |
I1228 06:11:12.393774 1 dynamic_serving_content.go:130] Starting serving-cert::/etc/kubernetes/pki/apiserver.crt::/etc/kubernetes/pki/apiserver.key | |
I1228 06:11:12.396429 1 secure_serving.go:197] Serving securely on [::]:6443 | |
I1228 06:11:12.397276 1 apf_controller.go:249] Starting API Priority and Fairness config controller | |
I1228 06:11:12.397327 1 customresource_discovery_controller.go:209] Starting DiscoveryController | |
I1228 06:11:12.397498 1 controller.go:83] Starting OpenAPI AggregationController | |
I1228 06:11:12.399595 1 naming_controller.go:291] Starting NamingConditionController | |
I1228 06:11:12.399766 1 crd_finalizer.go:266] Starting CRDFinalizer | |
I1228 06:11:12.399854 1 available_controller.go:475] Starting AvailableConditionController | |
I1228 06:11:12.399914 1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller | |
I1228 06:11:12.400055 1 establishing_controller.go:76] Starting EstablishingController | |
I1228 06:11:12.400105 1 dynamic_serving_content.go:130] Starting aggregator-proxy-cert::/etc/kubernetes/pki/front-proxy-client.crt::/etc/kubernetes/pki/front-proxy-client.key | |
I1228 06:11:12.400227 1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController | |
I1228 06:11:12.400565 1 apiservice_controller.go:97] Starting APIServiceRegistrationController | |
I1228 06:11:12.400645 1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller | |
I1228 06:11:12.400679 1 autoregister_controller.go:141] Starting autoregister controller | |
I1228 06:11:12.400688 1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller | |
I1228 06:11:12.400722 1 cache.go:32] Waiting for caches to sync for autoregister controller | |
I1228 06:11:12.400749 1 shared_informer.go:240] Waiting for caches to sync for cluster_authentication_trust_controller | |
I1228 06:11:12.400174 1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController | |
I1228 06:11:12.400933 1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/etc/kubernetes/pki/ca.crt | |
I1228 06:11:12.401095 1 dynamic_cafile_content.go:167] Starting request-header::/etc/kubernetes/pki/front-proxy-ca.crt | |
I1228 06:11:12.404292 1 controller.go:86] Starting OpenAPI controller | |
I1228 06:11:12.404483 1 crdregistration_controller.go:111] Starting crd-autoregister controller | |
I1228 06:11:12.404518 1 shared_informer.go:240] Waiting for caches to sync for crd-autoregister | |
I1228 06:11:12.421259 1 tlsconfig.go:240] Starting DynamicServingCertificateController | |
E1228 06:11:12.422146 1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.1.91, ResourceVersion: 0, AdditionalErrorMsg: | |
I1228 06:11:13.427016 1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue). | |
I1228 06:11:13.427092 1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue). | |
I1228 06:11:14.000416 1 cache.go:39] Caches are synced for AvailableConditionController controller | |
I1228 06:11:14.001172 1 cache.go:39] Caches are synced for autoregister controller | |
I1228 06:11:14.017064 1 storage_scheduling.go:148] all system priority classes are created successfully or already exist. | |
I1228 06:11:14.018568 1 shared_informer.go:247] Caches are synced for crd-autoregister | |
I1228 06:11:14.018645 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller | |
I1228 06:11:14.018695 1 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller | |
I1228 06:11:14.060050 1 shared_informer.go:247] Caches are synced for node_authorizer | |
I1228 06:11:14.078230 1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io | |
I1228 06:11:14.097562 1 apf_controller.go:253] Running API Priority and Fairness config worker | |
I1228 06:11:36.455058 1 client.go:360] parsed scheme: "passthrough" | |
I1228 06:11:36.455383 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>} | |
I1228 06:11:36.455485 1 clientconn.go:948] ClientConn switching balancer to "pick_first" | |
I1228 06:12:17.160922 1 client.go:360] parsed scheme: "passthrough" | |
I1228 06:12:17.161016 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>} | |
I1228 06:12:17.161053 1 clientconn.go:948] ClientConn switching balancer to "pick_first" | |
I1228 06:12:50.076804 1 client.go:360] parsed scheme: "passthrough" | |
(...) | |
E1228 06:21:14.064304 1 storage_flowcontrol.go:137] failed creating mandatory flowcontrol settings: failed getting mandatory FlowSchema exempt due to the server was unable to return a response in the time allotted, but may still be processing the request (get flowschemas.flowcontrol.apiserver.k8s.io exempt), will retry later | |
E1228 06:21:14.208693 1 repair.go:118] unable to refresh the service IP block: the server was unable to return a response in the time allotted, but may still be processing the request (get services) | |
E1228 06:21:14.229966 1 repair.go:75] unable to refresh the port block: the server was unable to return a response in the time allotted, but may still be processing the request (get services) | |
E1228 06:21:16.165956 1 controller.go:203] unable to create required kubernetes system namespace kube-system: the server was unable to return a response in the time allotted, but may still be processing the request (post namespaces) | |
I1228 06:21:21.842662 1 client.go:360] parsed scheme: "passthrough" | |
I1228 06:21:21.843006 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>} | |
I1228 06:21:21.843103 1 clientconn.go:948] ClientConn switching balancer to "pick_first" | |
I1228 06:21:23.834212 1 dynamic_cafile_content.go:182] Shutting down request-header::/etc/kubernetes/pki/front-proxy-ca.crt | |
I1228 06:21:23.834351 1 controller.go:181] Shutting down kubernetes service endpoint reconciler | |
I1228 06:21:23.839303 1 controller.go:123] Shutting down OpenAPI controller | |
I1228 06:21:23.839415 1 apiservice_controller.go:131] Shutting down APIServiceRegistrationController | |
I1228 06:21:23.839576 1 cluster_authentication_trust_controller.go:463] Shutting down cluster_authentication_trust_controller controller | |
I1228 06:21:23.839651 1 crdregistration_controller.go:142] Shutting down crd-autoregister controller | |
I1228 06:21:23.839740 1 naming_controller.go:302] Shutting down NamingConditionController | |
I1228 06:21:23.839807 1 nonstructuralschema_controller.go:204] Shutting down NonStructuralSchemaConditionController | |
I1228 06:21:23.839880 1 establishing_controller.go:87] Shutting down EstablishingController | |
I1228 06:21:23.839952 1 crd_finalizer.go:278] Shutting down CRDFinalizer | |
I1228 06:21:23.840029 1 apiapproval_controller.go:198] Shutting down KubernetesAPIApprovalPolicyConformantConditionController | |
I1228 06:21:23.840106 1 autoregister_controller.go:165] Shutting down autoregister controller | |
I1228 06:21:23.840195 1 available_controller.go:487] Shutting down AvailableConditionController | |
I1228 06:21:23.840289 1 customresource_discovery_controller.go:245] Shutting down DiscoveryController | |
E1228 06:21:23.840254 1 controller.go:184] StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.1.91, ResourceVersion: 0, AdditionalErrorMsg: | |
I1228 06:21:23.842576 1 dynamic_cafile_content.go:182] Shutting down request-header::/etc/kubernetes/pki/front-proxy-ca.crt | |
I1228 06:21:23.842678 1 dynamic_cafile_content.go:182] Shutting down client-ca-bundle::/etc/kubernetes/pki/ca.crt | |
I1228 06:21:23.842752 1 dynamic_serving_content.go:145] Shutting down aggregator-proxy-cert::/etc/kubernetes/pki/front-proxy-client.crt::/etc/kubernetes/pki/front-proxy-client.key | |
I1228 06:21:23.842823 1 controller.go:89] Shutting down OpenAPI AggregationController | |
I1228 06:21:23.843632 1 tlsconfig.go:255] Shutting down DynamicServingCertificateController | |
I1228 06:21:23.843770 1 dynamic_serving_content.go:145] Shutting down serving-cert::/etc/kubernetes/pki/apiserver.crt::/etc/kubernetes/pki/apiserver.key | |
I1228 06:21:23.843930 1 dynamic_cafile_content.go:182] Shutting down client-ca-bundle::/etc/kubernetes/pki/ca.crt | |
I1228 06:21:23.868442 1 secure_serving.go:241] Stopped listening on [::]:6443 | |
E1228 06:21:23.872144 1 controller.go:223] unable to sync kubernetes service: Post "https://localhost:6443/api/v1/namespaces": http2: server sent GOAWAY and closed the connection; LastStreamID=945, ErrCode=NO_ERROR, debug="" | |
E1228 06:21:24.922424 1 controller.go:203] unable to create required kubernetes system namespace kube-public: Post "https://localhost:6443/api/v1/namespaces": dial tcp 127.0.0.1:6443: connect: connection refused | |
E1228 06:21:24.934137 1 controller.go:203] unable to create required kubernetes system namespace kube-node-lease: Post "https://localhost:6443/api/v1/namespaces": dial tcp 127.0.0.1:6443: connect: connection refused |
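
Note: the apiserver above comes up and serves on :6443, but requests that go through etcd ("the server was unable to return a response in the time allotted") time out until the process shuts itself down at 06:21:23; the repeated parsed scheme: "passthrough" / pick_first lines are its embedded etcd client reconnecting to https://127.0.0.1:2379. Below is a minimal Go sketch (not the apiserver's own code) for probing that same etcd endpoint directly; the client import path and the kubeadm certificate paths are assumptions, adjust them for the cluster.

    // etcd-probe.go — dial the etcd endpoint the apiserver logs above and ask for its status.
    package main

    import (
        "context"
        "crypto/tls"
        "crypto/x509"
        "fmt"
        "log"
        "os"
        "time"

        clientv3 "go.etcd.io/etcd/client/v3" // older clusters may vendor go.etcd.io/etcd/clientv3 instead
    )

    func main() {
        // Client certificate the kube-apiserver normally uses towards etcd (assumed kubeadm layout).
        cert, err := tls.LoadX509KeyPair(
            "/etc/kubernetes/pki/apiserver-etcd-client.crt",
            "/etc/kubernetes/pki/apiserver-etcd-client.key",
        )
        if err != nil {
            log.Fatalf("loading client cert: %v", err)
        }
        caPEM, err := os.ReadFile("/etc/kubernetes/pki/etcd/ca.crt")
        if err != nil {
            log.Fatalf("reading etcd CA: %v", err)
        }
        pool := x509.NewCertPool()
        pool.AppendCertsFromPEM(caPEM)

        cli, err := clientv3.New(clientv3.Config{
            Endpoints:   []string{"https://127.0.0.1:2379"},
            DialTimeout: 5 * time.Second,
            TLS:         &tls.Config{Certificates: []tls.Certificate{cert}, RootCAs: pool},
        })
        if err != nil {
            log.Fatalf("dialing etcd: %v", err)
        }
        defer cli.Close()

        // A slow or unhealthy etcd shows up here as a long round trip or an error,
        // matching the "unable to return a response in the time allotted" lines above.
        ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
        defer cancel()
        start := time.Now()
        status, err := cli.Status(ctx, "https://127.0.0.1:2379")
        if err != nil {
            log.Fatalf("etcd status: %v", err)
        }
        fmt.Printf("etcd %s, dbSize=%d bytes, leader=%x, took %s\n",
            status.Version, status.DbSize, status.Leader, time.Since(start))
    }
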
kube-scheduler log:
I1228 06:19:34.486049 1 serving.go:331] Generated self-signed cert in-memory | |
W1228 06:19:49.653513 1 authentication.go:332] Error looking up in-cluster authentication configuration: Get "https://192.168.1.91:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication?timeout=10s": context deadline exceeded (Client.Timeout exceeded while awaiting headers) | |
W1228 06:19:49.654049 1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous. | |
W1228 06:19:49.654146 1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false | |
I1228 06:20:21.738818 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file | |
I1228 06:20:21.739006 1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file | |
I1228 06:20:21.743749 1 secure_serving.go:197] Serving securely on 127.0.0.1:10259 | |
I1228 06:20:21.746133 1 tlsconfig.go:240] Starting DynamicServingCertificateController | |
I1228 06:20:31.744194 1 trace.go:205] Trace[43007490]: "Reflector ListAndWatch" name:k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206 (28-Dec-2020 06:20:21.741) (total time: 10002ms): | |
Trace[43007490]: [10.002347304s] [10.002347304s] END | |
E1228 06:20:31.744402 1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.1.91:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": context deadline exceeded (Client.Timeout exceeded while awaiting headers) | |
I1228 06:20:43.256486 1 trace.go:205] Trace[110251461]: "Reflector ListAndWatch" name:k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206 (28-Dec-2020 06:20:33.253) (total time: 10002ms): | |
Trace[110251461]: [10.002183631s] [10.002183631s] END | |
E1228 06:20:43.256735 1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.1.91:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": context deadline exceeded (Client.Timeout exceeded while awaiting headers) | |
I1228 06:20:56.296574 1 trace.go:205] Trace[1790161166]: "Reflector ListAndWatch" name:k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206 (28-Dec-2020 06:20:46.290) (total time: 10005ms): | |
Trace[1790161166]: [10.005548632s] [10.005548632s] END | |
E1228 06:20:56.297109 1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.1.91:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": context deadline exceeded (Client.Timeout exceeded while awaiting headers) | |
I1228 06:21:11.451753 1 trace.go:205] Trace[1933831926]: "Reflector ListAndWatch" name:k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206 (28-Dec-2020 06:21:01.447) (total time: 10003ms): | |
Trace[1933831926]: [10.003351842s] [10.003351842s] END | |
E1228 06:21:11.452203 1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.1.91:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": context deadline exceeded (Client.Timeout exceeded while awaiting headers) | |
I1228 06:21:21.750616 1 trace.go:205] Trace[177667709]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (28-Dec-2020 06:20:21.745) (total time: 60004ms): | |
Trace[177667709]: [1m0.00474248s] [1m0.00474248s] END | |
E1228 06:21:21.750840 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: the server was unable to return a response in the time allotted, but may still be processing the request (get statefulsets.apps) | |
I1228 06:21:21.751884 1 trace.go:205] Trace[1322787112]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (28-Dec-2020 06:20:21.745) (total time: 60006ms): | |
Trace[1322787112]: [1m0.006198534s] [1m0.006198534s] END | |
E1228 06:21:21.752010 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: the server was unable to return a response in the time allotted, but may still be processing the request (get replicasets.apps) | |
I1228 06:21:21.757261 1 trace.go:205] Trace[2038839098]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (28-Dec-2020 06:20:21.747) (total time: 60009ms): | |
Trace[2038839098]: [1m0.009569326s] [1m0.009569326s] END | |
E1228 06:21:21.757408 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: the server was unable to return a response in the time allotted, but may still be processing the request (get storageclasses.storage.k8s.io) | |
I1228 06:21:21.759973 1 trace.go:205] Trace[797821890]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (28-Dec-2020 06:20:21.754) (total time: 60005ms): | |
Trace[797821890]: [1m0.005549427s] [1m0.005549427s] END | |
E1228 06:21:21.760055 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: the server was unable to return a response in the time allotted, but may still be processing the request (get pods) | |
I1228 06:21:21.776930 1 trace.go:205] Trace[417148062]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (28-Dec-2020 06:20:21.769) (total time: 60006ms): | |
Trace[417148062]: [1m0.006785764s] [1m0.006785764s] END | |
E1228 06:21:21.777063 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: the server was unable to return a response in the time allotted, but may still be processing the request (get persistentvolumeclaims) | |
I1228 06:21:21.778198 1 trace.go:205] Trace[639590296]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (28-Dec-2020 06:20:21.770) (total time: 60007ms): | |
Trace[639590296]: [1m0.007338868s] [1m0.007338868s] END | |
E1228 06:21:21.778298 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: the server was unable to return a response in the time allotted, but may still be processing the request (get services) | |
I1228 06:21:21.800185 1 trace.go:205] Trace[1193642553]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (28-Dec-2020 06:20:21.787) (total time: 60012ms): | |
Trace[1193642553]: [1m0.012241717s] [1m0.012241717s] END | |
E1228 06:21:21.800332 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: the server was unable to return a response in the time allotted, but may still be processing the request (get csinodes.storage.k8s.io) | |
I1228 06:21:21.801145 1 trace.go:205] Trace[2045125395]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (28-Dec-2020 06:20:21.789) (total time: 60010ms): | |
Trace[2045125395]: [1m0.010551445s] [1m0.010551445s] END | |
E1228 06:21:21.801288 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: the server was unable to return a response in the time allotted, but may still be processing the request (get replicationcontrollers) | |
I1228 06:21:21.804897 1 trace.go:205] Trace[209631218]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (28-Dec-2020 06:20:21.791) (total time: 60012ms): | |
Trace[209631218]: [1m0.012927742s] [1m0.012927742s] END | |
E1228 06:21:21.805020 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: the server was unable to return a response in the time allotted, but may still be processing the request (get persistentvolumes) | |
I1228 06:21:21.805052 1 trace.go:205] Trace[1409966276]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (28-Dec-2020 06:20:21.786) (total time: 60018ms): | |
Trace[1409966276]: [1m0.018494157s] [1m0.018494157s] END | |
I1228 06:21:21.805104 1 trace.go:205] Trace[1708619672]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (28-Dec-2020 06:20:21.791) (total time: 60013ms): | |
Trace[1708619672]: [1m0.013506597s] [1m0.013506597s] END | |
E1228 06:21:21.805126 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: the server was unable to return a response in the time allotted, but may still be processing the request (get nodes) | |
E1228 06:21:21.805172 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: the server was unable to return a response in the time allotted, but may still be processing the request (get poddisruptionbudgets.policy) | |
E1228 06:21:24.858247 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.1.91:6443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.1.91:6443: connect: connection refused | |
E1228 06:21:24.859186 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.1.91:6443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.1.91:6443: connect: connection refused | |
E1228 06:21:24.860447 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.1.91:6443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.1.91:6443: connect: connection refused | |
E1228 06:21:24.860311 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.1.91:6443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.1.91:6443: connect: connection refused | |
E1228 06:21:24.862581 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.1.91:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.1.91:6443: connect: connection refused | |
E1228 06:21:24.862923 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.1.91:6443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.1.91:6443: connect: connection refused | |
E1228 06:21:24.881381 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.1.91:6443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.1.91:6443: connect: connection refused | |
E1228 06:21:24.891230 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.1.91:6443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.1.91:6443: connect: connection refused | |
E1228 06:21:24.896832 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.1.91:6443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.1.91:6443: connect: connection refused | |
E1228 06:21:24.900628 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: Get "https://192.168.1.91:6443/apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.1.91:6443: connect: connection refused | |
E1228 06:21:24.902968 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.1.91:6443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.1.91:6443: connect: connection refused | |
E1228 06:21:24.912233 1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.1.91:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.1.91:6443: connect: connection refused | |
E1228 06:21:26.609511 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.1.91:6443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.1.91:6443: connect: connection refused | |
E1228 06:21:26.717940 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.1.91:6443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.1.91:6443: connect: connection refused | |
E1228 06:21:26.738979 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.1.91:6443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.1.91:6443: connect: connection refused | |
E1228 06:21:26.778244 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.1.91:6443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.1.91:6443: connect: connection refused | |
E1228 06:21:26.795518 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.1.91:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.1.91:6443: connect: connection refused | |
E1228 06:21:27.031177 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.1.91:6443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.1.91:6443: connect: connection refused | |
E1228 06:21:27.152257 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.1.91:6443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.1.91:6443: connect: connection refused | |
E1228 06:21:27.229879 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: Get "https://192.168.1.91:6443/apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.1.91:6443: connect: connection refused | |
E1228 06:21:27.387030 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.1.91:6443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.1.91:6443: connect: connection refused | |
E1228 06:21:27.442541 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.1.91:6443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.1.91:6443: connect: connection refused | |
E1228 06:21:28.062616 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.1.91:6443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.1.91:6443: connect: connection refused | |
E1228 06:21:30.181034 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.1.91:6443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.1.91:6443: connect: connection refused | |
E1228 06:21:30.539029 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.1.91:6443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.1.91:6443: connect: connection refused | |
E1228 06:21:30.913778 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.1.91:6443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.1.91:6443: connect: connection refused | |
E1228 06:21:30.926229 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.1.91:6443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.1.91:6443: connect: connection refused | |
E1228 06:21:31.392607 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.1.91:6443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.1.91:6443: connect: connection refused | |
E1228 06:21:31.432153 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.1.91:6443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.1.91:6443: connect: connection refused | |
E1228 06:21:31.591536 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.1.91:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.1.91:6443: connect: connection refused | |
E1228 06:21:31.898974 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: Get "https://192.168.1.91:6443/apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.1.91:6443: connect: connection refused | |
E1228 06:21:32.355141 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.1.91:6443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.1.91:6443: connect: connection refused | |
E1228 06:21:32.437062 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.1.91:6443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.1.91:6443: connect: connection refused | |
E1228 06:21:33.391145 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.1.91:6443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.1.91:6443: connect: connection refused | |
E1228 06:21:38.105817 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.1.91:6443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.1.91:6443: connect: connection refused | |
E1228 06:21:38.833935 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: Get "https://192.168.1.91:6443/apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.1.91:6443: connect: connection refused | |
E1228 06:21:40.261771 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.1.91:6443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.1.91:6443: connect: connection refused | |
E1228 06:21:41.395826 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.1.91:6443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.1.91:6443: connect: connection refused | |
E1228 06:21:42.010255 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.1.91:6443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.1.91:6443: connect: connection refused | |
E1228 06:21:42.467683 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.1.91:6443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.1.91:6443: connect: connection refused | |
E1228 06:21:42.613866 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.1.91:6443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.1.91:6443: connect: connection refused | |
E1228 06:21:42.786343 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.1.91:6443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.1.91:6443: connect: connection refused | |
E1228 06:21:42.789700 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.1.91:6443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.1.91:6443: connect: connection refused | |
E1228 06:21:42.966557 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.1.91:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.1.91:6443: connect: connection refused | |
E1228 06:21:46.187801 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.1.91:6443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.1.91:6443: connect: connection refused | |
E1228 06:21:50.040452 1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.1.91:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.1.91:6443: connect: connection refused | |
E1228 06:21:51.730580 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: Get "https://192.168.1.91:6443/apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.1.91:6443: connect: connection refused | |
I1228 06:22:06.012619 1 trace.go:205] Trace[1647299697]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (28-Dec-2020 06:21:56.006) (total time: 10006ms): | |
Trace[1647299697]: [10.006391913s] [10.006391913s] END | |
E1228 06:22:06.012709 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.1.91:6443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": net/http: TLS handshake timeout | |
I1228 06:22:06.061211 1 trace.go:205] Trace[1113264501]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (28-Dec-2020 06:21:56.059) (total time: 10001ms): | |
Trace[1113264501]: [10.001470938s] [10.001470938s] END | |
E1228 06:22:06.061272 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.1.91:6443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": net/http: TLS handshake timeout | |
I1228 06:22:07.545292 1 trace.go:205] Trace[1756286509]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (28-Dec-2020 06:21:57.539) (total time: 10005ms): | |
Trace[1756286509]: [10.005857143s] [10.005857143s] END | |
E1228 06:22:07.545449 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.1.91:6443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": net/http: TLS handshake timeout | |
I1228 06:22:09.539350 1 trace.go:205] Trace[795596707]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (28-Dec-2020 06:21:59.535) (total time: 10003ms): | |
Trace[795596707]: [10.003553141s] [10.003553141s] END | |
E1228 06:22:09.539464 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.1.91:6443/api/v1/persistentvolumes?limit=500&resourceVersion=0": net/http: TLS handshake timeout | |
I1228 06:22:09.733407 1 trace.go:205] Trace[43903513]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (28-Dec-2020 06:21:59.731) (total time: 10001ms): | |
Trace[43903513]: [10.001626152s] [10.001626152s] END | |
E1228 06:22:09.733484 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.1.91:6443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": net/http: TLS handshake timeout | |
I1228 06:22:09.944413 1 trace.go:205] Trace[641476292]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (28-Dec-2020 06:21:59.941) (total time: 10002ms): | |
Trace[641476292]: [10.002347429s] [10.002347429s] END | |
E1228 06:22:09.944550 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.1.91:6443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout | |
I1228 06:22:10.588360 1 trace.go:205] Trace[103064766]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (28-Dec-2020 06:22:00.585) (total time: 10002ms): | |
Trace[103064766]: [10.002473142s] [10.002473142s] END | |
E1228 06:22:10.588428 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.1.91:6443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": net/http: TLS handshake timeout | |
I1228 06:22:11.672997 1 trace.go:205] Trace[665314441]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (28-Dec-2020 06:22:01.669) (total time: 10003ms): | |
Trace[665314441]: [10.003048747s] [10.003048747s] END | |
E1228 06:22:11.673194 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.1.91:6443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": net/http: TLS handshake timeout | |
I1228 06:22:12.076255 1 trace.go:205] Trace[993407643]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (28-Dec-2020 06:22:02.071) (total time: 10004ms): | |
Trace[993407643]: [10.00407216s] [10.00407216s] END | |
E1228 06:22:12.076522 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.1.91:6443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout | |
I1228 06:22:15.583958 1 trace.go:205] Trace[1182452183]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (28-Dec-2020 06:22:05.581) (total time: 10002ms): | |
Trace[1182452183]: [10.002368806s] [10.002368806s] END | |
E1228 06:22:15.584076 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.1.91:6443/api/v1/nodes?limit=500&resourceVersion=0": net/http: TLS handshake timeout | |
I1228 06:22:24.239610 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file | |
I1228 06:22:57.145869 1 leaderelection.go:243] attempting to acquire leader lease kube-system/kube-scheduler... | |
I1228 06:23:15.777843 1 leaderelection.go:253] successfully acquired lease kube-system/kube-scheduler | |
E1228 06:25:48.761109 1 leaderelection.go:325] error retrieving resource lock kube-system/kube-scheduler: Get "https://192.168.1.91:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-scheduler?timeout=10s": context deadline exceeded (Client.Timeout exceeded while awaiting headers) | |
I1228 06:25:48.761454 1 leaderelection.go:278] failed to renew lease kube-system/kube-scheduler: timed out waiting for the condition | |
F1228 06:25:48.761693 1 server.go:205] leaderelection lost | |
goroutine 1 [running]: | |
k8s.io/kubernetes/vendor/k8s.io/klog/v2.stacks(0x21dba01, 0x0, 0x41, 0xd5) | |
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1026 +0x94 | |
k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).output(0x21c9758, 0x3, 0x0, 0x0, 0x2b0e000, 0x210707c, 0x9, 0xcd, 0x0) | |
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:975 +0x110 | |
k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).printf(0x21c9758, 0x3, 0x0, 0x0, 0x0, 0x0, 0x1458c2b, 0x13, 0x0, 0x0, ...) | |
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:750 +0x130 | |
k8s.io/kubernetes/vendor/k8s.io/klog/v2.Fatalf(...) | |
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1502 | |
k8s.io/kubernetes/cmd/kube-scheduler/app.Run.func3() | |
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:205 +0x78 | |
k8s.io/kubernetes/vendor/k8s.io/client-go/tools/leaderelection.(*LeaderElector).Run.func1(0x294a790) | |
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/leaderelection/leaderelection.go:199 +0x20 | |
k8s.io/kubernetes/vendor/k8s.io/client-go/tools/leaderelection.(*LeaderElector).Run(0x294a790, 0x1686918, 0x2a9c500) | |
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/leaderelection/leaderelection.go:209 +0x11c | |
k8s.io/kubernetes/cmd/kube-scheduler/app.Run(0x1686918, 0x2682c00, 0x279a7f0, 0x25b3080, 0x0, 0x0) | |
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:213 +0x400 | |
k8s.io/kubernetes/cmd/kube-scheduler/app.runCommand(0x2922c60, 0x2942000, 0x0, 0x0, 0x0, 0x0, 0x0) | |
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:132 +0x104 | |
k8s.io/kubernetes/cmd/kube-scheduler/app.NewSchedulerCommand.func1(0x2922c60, 0x25b2990, 0x0, 0x6) | |
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:81 +0x3c | |
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute(0x2922c60, 0x251a0c8, 0x6, 0x7, 0x2922c60, 0x251a0c8) | |
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:854 +0x1e8 | |
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC(0x2922c60, 0x1521950, 0x0, 0x2922c60) | |
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:958 +0x274 | |
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute(...) | |
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:895 | |
main.main() | |
_output/dockerized/go/src/k8s.io/kubernetes/cmd/kube-scheduler/scheduler.go:46 +0x10c | |
(a lot of similar debugging data follows) |
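
Note: what kills the scheduler here is leader election. It acquires the kube-system/kube-scheduler lease at 06:23:15, then cannot renew it within the renew deadline (the lease GETs with ?timeout=10s keep failing), and client-go's OnStoppedLeading callback runs klog.Fatalf("leaderelection lost") — that is the F1228 line and the stack trace above. The process exits deliberately so it can be restarted (for a kubeadm static pod, typically by the kubelet). A minimal sketch of that client-go leader-election loop follows; the timings mirror common defaults and the lease name is hypothetical, both are assumptions rather than the scheduler's exact configuration.

    // leaderelection-sketch.go — reproduce the "failed to renew lease ... / leaderelection lost" behaviour.
    package main

    import (
        "context"
        "os"
        "time"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
        "k8s.io/client-go/tools/leaderelection"
        "k8s.io/client-go/tools/leaderelection/resourcelock"
        "k8s.io/klog/v2"
    )

    func main() {
        cfg, err := rest.InClusterConfig()
        if err != nil {
            klog.Fatalf("building config: %v", err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)

        hostname, _ := os.Hostname()
        lock, err := resourcelock.New(
            resourcelock.LeasesResourceLock, // a coordination.k8s.io/v1 Lease, as in the log above
            "kube-system",
            "kube-scheduler-demo", // hypothetical lease name, to avoid touching the real one
            client.CoreV1(), client.CoordinationV1(),
            resourcelock.ResourceLockConfig{Identity: hostname},
        )
        if err != nil {
            klog.Fatalf("building resource lock: %v", err)
        }

        leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
            Lock:          lock,
            LeaseDuration: 15 * time.Second, // how long a lease stays valid without renewal (assumed)
            RenewDeadline: 10 * time.Second, // give up renewing after this; compare the ?timeout=10s requests above
            RetryPeriod:   2 * time.Second,
            Callbacks: leaderelection.LeaderElectionCallbacks{
                OnStartedLeading: func(ctx context.Context) {
                    klog.Info("acquired lease; doing work")
                    <-ctx.Done()
                },
                OnStoppedLeading: func() {
                    // kube-scheduler treats this as fatal, which produces the
                    // "F1228 ... leaderelection lost" line and the goroutine dump above.
                    klog.Fatalf("leaderelection lost")
                },
            },
        })
    }
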
kube-controller-manager log:
Flag --port has been deprecated, see --secure-port instead. | |
I1228 06:19:32.557654 1 serving.go:331] Generated self-signed cert in-memory | |
I1228 06:19:35.759500 1 controllermanager.go:176] Version: v1.20.1 | |
I1228 06:19:35.766201 1 secure_serving.go:197] Serving securely on 127.0.0.1:10257 | |
I1228 06:19:35.766359 1 leaderelection.go:243] attempting to acquire leader lease kube-system/kube-controller-manager... | |
I1228 06:19:35.767154 1 dynamic_cafile_content.go:167] Starting request-header::/etc/kubernetes/pki/front-proxy-ca.crt | |
I1228 06:19:35.767262 1 tlsconfig.go:240] Starting DynamicServingCertificateController | |
I1228 06:19:35.767408 1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/etc/kubernetes/pki/ca.crt | |
E1228 06:19:45.767408 1 leaderelection.go:325] error retrieving resource lock kube-system/kube-controller-manager: Get "https://192.168.1.91:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=10s": context deadline exceeded (Client.Timeout exceeded while awaiting headers) | |
E1228 06:19:59.268203 1 leaderelection.go:325] error retrieving resource lock kube-system/kube-controller-manager: Get "https://192.168.1.91:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=10s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) | |
E1228 06:20:13.666697 1 leaderelection.go:325] error retrieving resource lock kube-system/kube-controller-manager: Get "https://192.168.1.91:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=10s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) | |
E1228 06:20:27.747423 1 leaderelection.go:325] error retrieving resource lock kube-system/kube-controller-manager: Get "https://192.168.1.91:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=10s": context deadline exceeded (Client.Timeout exceeded while awaiting headers) | |
E1228 06:20:41.239134 1 leaderelection.go:325] error retrieving resource lock kube-system/kube-controller-manager: Get "https://192.168.1.91:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=10s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) | |
E1228 06:20:53.276733 1 leaderelection.go:325] error retrieving resource lock kube-system/kube-controller-manager: Get "https://192.168.1.91:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=10s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) | |
E1228 06:21:06.289232 1 leaderelection.go:325] error retrieving resource lock kube-system/kube-controller-manager: Get "https://192.168.1.91:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=10s": context deadline exceeded | |
E1228 06:21:20.253313 1 leaderelection.go:325] error retrieving resource lock kube-system/kube-controller-manager: Get "https://192.168.1.91:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=10s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) | |
E1228 06:21:24.896543 1 leaderelection.go:325] error retrieving resource lock kube-system/kube-controller-manager: Get "https://192.168.1.91:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=10s": dial tcp 192.168.1.91:6443: connect: connection refused | |
E1228 06:21:27.791905 1 leaderelection.go:325] error retrieving resource lock kube-system/kube-controller-manager: Get "https://192.168.1.91:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=10s": dial tcp 192.168.1.91:6443: connect: connection refused | |
E1228 06:21:30.808617 1 leaderelection.go:325] error retrieving resource lock kube-system/kube-controller-manager: Get "https://192.168.1.91:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=10s": dial tcp 192.168.1.91:6443: connect: connection refused | |
E1228 06:21:35.037061 1 leaderelection.go:325] error retrieving resource lock kube-system/kube-controller-manager: Get "https://192.168.1.91:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=10s": dial tcp 192.168.1.91:6443: connect: connection refused | |
E1228 06:21:38.633198 1 leaderelection.go:325] error retrieving resource lock kube-system/kube-controller-manager: Get "https://192.168.1.91:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=10s": dial tcp 192.168.1.91:6443: connect: connection refused | |
E1228 06:21:42.139896 1 leaderelection.go:325] error retrieving resource lock kube-system/kube-controller-manager: Get "https://192.168.1.91:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=10s": dial tcp 192.168.1.91:6443: connect: connection refused | |
E1228 06:21:44.952315 1 leaderelection.go:325] error retrieving resource lock kube-system/kube-controller-manager: Get "https://192.168.1.91:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=10s": dial tcp 192.168.1.91:6443: connect: connection refused | |
E1228 06:21:48.506614 1 leaderelection.go:325] error retrieving resource lock kube-system/kube-controller-manager: Get "https://192.168.1.91:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=10s": dial tcp 192.168.1.91:6443: connect: connection refused | |
E1228 06:21:50.583735 1 leaderelection.go:325] error retrieving resource lock kube-system/kube-controller-manager: Get "https://192.168.1.91:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=10s": dial tcp 192.168.1.91:6443: connect: connection refused | |
E1228 06:21:53.815199 1 leaderelection.go:325] error retrieving resource lock kube-system/kube-controller-manager: Get "https://192.168.1.91:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=10s": dial tcp 192.168.1.91:6443: connect: connection refused | |
E1228 06:22:07.268310 1 leaderelection.go:325] error retrieving resource lock kube-system/kube-controller-manager: Get "https://192.168.1.91:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=10s": context deadline exceeded | |
E1228 06:22:19.551431 1 leaderelection.go:325] error retrieving resource lock kube-system/kube-controller-manager: Get "https://192.168.1.91:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=10s": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) | |
I1228 06:22:38.249023 1 leaderelection.go:253] successfully acquired lease kube-system/kube-controller-manager | |
I1228 06:22:38.249325 1 event.go:291] "Event occurred" object="kube-system/kube-controller-manager" kind="Lease" apiVersion="coordination.k8s.io/v1" type="Normal" reason="LeaderElection" message="mc1_7465adbc-a10c-449b-99a9-63b49d1e1c46 became leader" | |
I1228 06:22:38.907935 1 shared_informer.go:240] Waiting for caches to sync for tokens | |
I1228 06:22:38.951492 1 node_ipam_controller.go:91] Sending events to api server. | |
I1228 06:22:39.008571 1 shared_informer.go:247] Caches are synced for tokens | |
I1228 06:22:49.378766 1 range_allocator.go:82] Sending events to api server. | |
I1228 06:22:49.379587 1 range_allocator.go:116] No Secondary Service CIDR provided. Skipping filtering out secondary service addresses. | |
I1228 06:22:49.380187 1 controllermanager.go:554] Started "nodeipam" | |
I1228 06:22:49.380439 1 node_ipam_controller.go:159] Starting ipam controller | |
I1228 06:22:49.380545 1 shared_informer.go:240] Waiting for caches to sync for node | |
I1228 06:22:49.396402 1 controllermanager.go:554] Started "pv-protection" | |
I1228 06:22:49.396594 1 pv_protection_controller.go:83] Starting PV protection controller | |
I1228 06:22:49.396755 1 shared_informer.go:240] Waiting for caches to sync for PV protection | |
I1228 06:22:49.417618 1 controllermanager.go:554] Started "root-ca-cert-publisher" | |
I1228 06:22:49.417734 1 publisher.go:98] Starting root CA certificate configmap publisher | |
I1228 06:22:49.417778 1 shared_informer.go:240] Waiting for caches to sync for crt configmap | |
I1228 06:22:49.434854 1 controllermanager.go:554] Started "endpointslicemirroring" | |
I1228 06:22:49.435001 1 endpointslicemirroring_controller.go:211] Starting EndpointSliceMirroring controller | |
I1228 06:22:49.435039 1 shared_informer.go:240] Waiting for caches to sync for endpoint_slice_mirroring | |
I1228 06:22:49.458334 1 garbagecollector.go:142] Starting garbage collector controller | |
I1228 06:22:49.458451 1 shared_informer.go:240] Waiting for caches to sync for garbage collector | |
I1228 06:22:49.458614 1 graph_builder.go:289] GraphBuilder running | |
I1228 06:22:49.459184 1 controllermanager.go:554] Started "garbagecollector" | |
I1228 06:22:49.471890 1 controllermanager.go:554] Started "daemonset" | |
I1228 06:22:49.472008 1 daemon_controller.go:285] Starting daemon sets controller | |
I1228 06:22:49.472050 1 shared_informer.go:240] Waiting for caches to sync for daemon sets | |
I1228 06:22:49.481055 1 controllermanager.go:554] Started "job" | |
I1228 06:22:49.481137 1 job_controller.go:148] Starting job controller | |
I1228 06:22:49.481216 1 shared_informer.go:240] Waiting for caches to sync for job | |
I1228 06:22:49.489598 1 controllermanager.go:554] Started "deployment" | |
I1228 06:22:49.489652 1 deployment_controller.go:153] Starting deployment controller | |
I1228 06:22:49.489681 1 shared_informer.go:240] Waiting for caches to sync for deployment | |
I1228 06:22:49.499501 1 controllermanager.go:554] Started "persistentvolume-expander" | |
I1228 06:22:49.499637 1 expand_controller.go:310] Starting expand controller | |
I1228 06:22:49.499678 1 shared_informer.go:240] Waiting for caches to sync for expand | |
I1228 06:22:49.509598 1 controllermanager.go:554] Started "pvc-protection" | |
I1228 06:22:49.509706 1 pvc_protection_controller.go:110] Starting PVC protection controller | |
I1228 06:22:49.509753 1 shared_informer.go:240] Waiting for caches to sync for PVC protection | |
I1228 06:22:49.518324 1 controllermanager.go:554] Started "endpoint" | |
I1228 06:22:49.518560 1 endpoints_controller.go:184] Starting endpoint controller | |
I1228 06:22:49.518600 1 shared_informer.go:240] Waiting for caches to sync for endpoint | |
I1228 06:22:49.533889 1 controllermanager.go:554] Started "serviceaccount" | |
I1228 06:22:49.533991 1 serviceaccounts_controller.go:117] Starting service account controller | |
I1228 06:22:49.534041 1 shared_informer.go:240] Waiting for caches to sync for service account | |
I1228 06:22:49.591382 1 controllermanager.go:554] Started "horizontalpodautoscaling" | |
I1228 06:22:49.591436 1 horizontal.go:169] Starting HPA controller | |
I1228 06:22:49.591474 1 shared_informer.go:240] Waiting for caches to sync for HPA | |
I1228 06:22:49.601142 1 controllermanager.go:554] Started "cronjob" | |
I1228 06:22:49.601222 1 cronjob_controller.go:96] Starting CronJob Manager | |
I1228 06:22:49.609970 1 node_lifecycle_controller.go:77] Sending events to api server | |
E1228 06:22:49.610068 1 core.go:232] failed to start cloud node lifecycle controller: no cloud provider provided | |
W1228 06:22:49.610111 1 controllermanager.go:546] Skipping "cloud-node-lifecycle" | |
I1228 06:22:49.622224 1 controllermanager.go:554] Started "endpointslice" | |
I1228 06:22:49.622292 1 endpointslice_controller.go:237] Starting endpoint slice controller | |
I1228 06:22:49.622370 1 shared_informer.go:240] Waiting for caches to sync for endpoint_slice | |
I1228 06:22:49.633626 1 controllermanager.go:554] Started "tokencleaner" | |
I1228 06:22:49.633770 1 tokencleaner.go:118] Starting token cleaner controller | |
I1228 06:22:49.633817 1 shared_informer.go:240] Waiting for caches to sync for token_cleaner | |
I1228 06:22:49.633852 1 shared_informer.go:247] Caches are synced for token_cleaner | |
I1228 06:22:49.643082 1 controllermanager.go:554] Started "attachdetach" | |
I1228 06:22:49.643354 1 attach_detach_controller.go:328] Starting attach detach controller | |
I1228 06:22:49.643441 1 shared_informer.go:240] Waiting for caches to sync for attach detach | |
I1228 06:22:49.652648 1 controllermanager.go:554] Started "clusterrole-aggregation" | |
I1228 06:22:49.652753 1 clusterroleaggregation_controller.go:149] Starting ClusterRoleAggregator | |
I1228 06:22:49.652803 1 shared_informer.go:240] Waiting for caches to sync for ClusterRoleAggregator | |
I1228 06:22:50.179430 1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for serviceaccounts | |
I1228 06:22:50.180154 1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for ingresses.extensions | |
I1228 06:22:50.180602 1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for roles.rbac.authorization.k8s.io | |
I1228 06:22:50.180948 1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for controllerrevisions.apps | |
I1228 06:22:50.181270 1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for replicasets.apps | |
I1228 06:22:50.181656 1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for endpointslices.discovery.k8s.io | |
I1228 06:22:50.182367 1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for limitranges | |
I1228 06:22:50.184478 1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for daemonsets.apps | |
I1228 06:22:50.185147 1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for cronjobs.batch | |
I1228 06:22:50.185760 1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for ingresses.networking.k8s.io | |
I1228 06:22:50.186386 1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for poddisruptionbudgets.policy | |
I1228 06:22:50.187289 1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for events.events.k8s.io | |
I1228 06:22:50.187740 1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for rolebindings.rbac.authorization.k8s.io | |
I1228 06:22:50.188285 1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for deployments.apps | |
I1228 06:22:50.188617 1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for horizontalpodautoscalers.autoscaling | |
I1228 06:22:50.188936 1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for jobs.batch | |
I1228 06:22:50.189318 1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for leases.coordination.k8s.io | |
I1228 06:22:50.189851 1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for endpoints | |
I1228 06:22:50.190304 1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for podtemplates | |
I1228 06:22:50.190746 1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for networkpolicies.networking.k8s.io | |
I1228 06:22:50.191205 1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for statefulsets.apps | |
I1228 06:22:50.191431 1 controllermanager.go:554] Started "resourcequota" | |
I1228 06:22:50.191557 1 resource_quota_controller.go:273] Starting resource quota controller | |
I1228 06:22:50.191677 1 shared_informer.go:240] Waiting for caches to sync for resource quota | |
I1228 06:22:50.191757 1 resource_quota_monitor.go:304] QuotaMonitor running | |
I1228 06:22:50.224295 1 certificate_controller.go:118] Starting certificate controller "csrsigning-kubelet-serving" | |
I1228 06:22:50.224361 1 shared_informer.go:240] Waiting for caches to sync for certificate-csrsigning-kubelet-serving | |
I1228 06:22:50.224403 1 dynamic_serving_content.go:130] Starting csr-controller::/etc/kubernetes/pki/ca.crt::/etc/kubernetes/pki/ca.key | |
I1228 06:22:50.228008 1 certificate_controller.go:118] Starting certificate controller "csrsigning-kubelet-client" | |
I1228 06:22:50.228085 1 shared_informer.go:240] Waiting for caches to sync for certificate-csrsigning-kubelet-client | |
I1228 06:22:50.228076 1 dynamic_serving_content.go:130] Starting csr-controller::/etc/kubernetes/pki/ca.crt::/etc/kubernetes/pki/ca.key | |
I1228 06:22:50.231495 1 certificate_controller.go:118] Starting certificate controller "csrsigning-kube-apiserver-client" | |
I1228 06:22:50.231538 1 shared_informer.go:240] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client | |
I1228 06:22:50.231620 1 dynamic_serving_content.go:130] Starting csr-controller::/etc/kubernetes/pki/ca.crt::/etc/kubernetes/pki/ca.key | |
I1228 06:22:50.234913 1 controllermanager.go:554] Started "csrsigning" | |
I1228 06:22:50.235004 1 certificate_controller.go:118] Starting certificate controller "csrsigning-legacy-unknown" | |
I1228 06:22:50.235111 1 shared_informer.go:240] Waiting for caches to sync for certificate-csrsigning-legacy-unknown | |
I1228 06:22:50.235154 1 dynamic_serving_content.go:130] Starting csr-controller::/etc/kubernetes/pki/ca.crt::/etc/kubernetes/pki/ca.key | |
I1228 06:22:50.247712 1 controllermanager.go:554] Started "csrcleaner" | |
I1228 06:22:50.247859 1 cleaner.go:82] Starting CSR cleaner controller | |
I1228 06:22:50.258343 1 controllermanager.go:554] Started "podgc" | |
I1228 06:22:50.258440 1 gc_controller.go:89] Starting GC controller | |
I1228 06:22:50.258497 1 shared_informer.go:240] Waiting for caches to sync for GC | |
I1228 06:22:50.351363 1 controllermanager.go:554] Started "namespace" | |
I1228 06:22:50.351510 1 namespace_controller.go:200] Starting namespace controller | |
I1228 06:22:50.351571 1 shared_informer.go:240] Waiting for caches to sync for namespace | |
I1228 06:22:50.360480 1 controllermanager.go:554] Started "ttl" | |
W1228 06:22:50.360578 1 controllermanager.go:546] Skipping "ephemeral-volume" | |
W1228 06:22:50.360637 1 controllermanager.go:546] Skipping "ttl-after-finished" | |
I1228 06:22:50.360642 1 ttl_controller.go:121] Starting TTL controller | |
I1228 06:22:50.360724 1 shared_informer.go:240] Waiting for caches to sync for TTL | |
I1228 06:22:50.379875 1 controllermanager.go:554] Started "disruption" | |
I1228 06:22:50.380038 1 disruption.go:331] Starting disruption controller | |
I1228 06:22:50.380104 1 shared_informer.go:240] Waiting for caches to sync for disruption | |
I1228 06:22:50.393950 1 controllermanager.go:554] Started "csrapproving" | |
I1228 06:22:50.394027 1 certificate_controller.go:118] Starting certificate controller "csrapproving" | |
I1228 06:22:50.394076 1 shared_informer.go:240] Waiting for caches to sync for certificate-csrapproving | |
I1228 06:22:50.404939 1 node_lifecycle_controller.go:380] Sending events to api server. | |
I1228 06:22:50.406229 1 taint_manager.go:163] Sending events to api server. | |
I1228 06:22:50.406921 1 node_lifecycle_controller.go:508] Controller will reconcile labels. | |
I1228 06:22:50.407163 1 controllermanager.go:554] Started "nodelifecycle" | |
W1228 06:22:50.407270 1 core.go:246] configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes. | |
I1228 06:22:50.407304 1 node_lifecycle_controller.go:542] Starting node controller | |
W1228 06:22:50.407333 1 controllermanager.go:546] Skipping "route" | |
I1228 06:22:50.407376 1 shared_informer.go:240] Waiting for caches to sync for taint | |
I1228 06:22:50.422010 1 controllermanager.go:554] Started "persistentvolume-binder" | |
I1228 06:22:50.423144 1 pv_controller_base.go:307] Starting persistent volume controller | |
I1228 06:22:50.423223 1 shared_informer.go:240] Waiting for caches to sync for persistent volume | |
I1228 06:22:50.439539 1 controllermanager.go:554] Started "replicaset" | |
I1228 06:22:50.439712 1 replica_set.go:182] Starting replicaset controller | |
I1228 06:22:50.439752 1 shared_informer.go:240] Waiting for caches to sync for ReplicaSet | |
I1228 06:22:50.452973 1 controllermanager.go:554] Started "statefulset" | |
I1228 06:22:50.453042 1 stateful_set.go:146] Starting stateful set controller | |
I1228 06:22:50.453082 1 shared_informer.go:240] Waiting for caches to sync for stateful set | |
I1228 06:22:50.466658 1 controllermanager.go:554] Started "bootstrapsigner" | |
I1228 06:22:50.466812 1 shared_informer.go:240] Waiting for caches to sync for bootstrap_signer | |
I1228 06:22:50.479917 1 controllermanager.go:554] Started "replicationcontroller" | |
I1228 06:22:50.480065 1 replica_set.go:182] Starting replicationcontroller controller | |
I1228 06:22:50.480123 1 shared_informer.go:240] Waiting for caches to sync for ReplicationController | |
E1228 06:22:50.490416 1 core.go:92] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail | |
W1228 06:22:50.490521 1 controllermanager.go:546] Skipping "service" | |
I1228 06:22:50.531833 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kube-apiserver-client | |
I1228 06:22:50.535335 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-legacy-unknown | |
W1228 06:22:50.540081 1 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="mc4" does not exist | |
W1228 06:22:50.541354 1 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="mc1" does not exist | |
W1228 06:22:50.541481 1 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="mc2" does not exist | |
W1228 06:22:50.542238 1 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="mc3" does not exist | |
I1228 06:22:50.552852 1 shared_informer.go:247] Caches are synced for namespace | |
I1228 06:22:50.553221 1 shared_informer.go:247] Caches are synced for ClusterRoleAggregator | |
I1228 06:22:50.554535 1 shared_informer.go:247] Caches are synced for stateful set | |
I1228 06:22:50.559013 1 shared_informer.go:247] Caches are synced for GC | |
I1228 06:22:50.561065 1 shared_informer.go:247] Caches are synced for TTL | |
I1228 06:22:50.567096 1 shared_informer.go:247] Caches are synced for bootstrap_signer | |
I1228 06:22:50.573353 1 shared_informer.go:247] Caches are synced for daemon sets | |
I1228 06:22:50.580695 1 shared_informer.go:247] Caches are synced for disruption | |
I1228 06:22:50.580767 1 disruption.go:339] Sending events to api server. | |
I1228 06:22:50.581165 1 shared_informer.go:247] Caches are synced for node | |
I1228 06:22:50.581233 1 shared_informer.go:247] Caches are synced for ReplicationController | |
I1228 06:22:50.581281 1 range_allocator.go:172] Starting range CIDR allocator | |
I1228 06:22:50.581332 1 shared_informer.go:240] Waiting for caches to sync for cidrallocator | |
I1228 06:22:50.581385 1 shared_informer.go:247] Caches are synced for cidrallocator | |
I1228 06:22:50.581401 1 shared_informer.go:247] Caches are synced for job | |
I1228 06:22:50.590788 1 shared_informer.go:247] Caches are synced for deployment | |
I1228 06:22:50.595077 1 shared_informer.go:247] Caches are synced for HPA | |
I1228 06:22:50.599734 1 shared_informer.go:247] Caches are synced for PV protection | |
I1228 06:22:50.600557 1 shared_informer.go:247] Caches are synced for certificate-csrapproving | |
I1228 06:22:50.600593 1 shared_informer.go:247] Caches are synced for expand | |
I1228 06:22:50.610063 1 shared_informer.go:247] Caches are synced for PVC protection | |
I1228 06:22:50.613827 1 shared_informer.go:247] Caches are synced for taint | |
I1228 06:22:50.614238 1 node_lifecycle_controller.go:1429] Initializing eviction metric for zone: | |
I1228 06:22:50.614796 1 taint_manager.go:187] Starting NoExecuteTaintManager | |
W1228 06:22:50.615131 1 node_lifecycle_controller.go:1044] Missing timestamp for Node mc2. Assuming now as a timestamp. | |
I1228 06:22:50.616327 1 event.go:291] "Event occurred" object="mc2" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node mc2 event: Registered Node mc2 in Controller" | |
I1228 06:22:50.616666 1 event.go:291] "Event occurred" object="mc3" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node mc3 event: Registered Node mc3 in Controller" | |
I1228 06:22:50.616751 1 event.go:291] "Event occurred" object="mc1" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node mc1 event: Registered Node mc1 in Controller" | |
I1228 06:22:50.616851 1 event.go:291] "Event occurred" object="mc4" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node mc4 event: Registered Node mc4 in Controller" | |
I1228 06:22:50.618352 1 shared_informer.go:247] Caches are synced for crt configmap | |
W1228 06:22:50.619321 1 node_lifecycle_controller.go:1044] Missing timestamp for Node mc3. Assuming now as a timestamp. | |
I1228 06:22:50.621775 1 shared_informer.go:247] Caches are synced for endpoint | |
W1228 06:22:50.621973 1 node_lifecycle_controller.go:1044] Missing timestamp for Node mc4. Assuming now as a timestamp. | |
W1228 06:22:50.622243 1 node_lifecycle_controller.go:1044] Missing timestamp for Node mc1. Assuming now as a timestamp. | |
I1228 06:22:50.623149 1 node_lifecycle_controller.go:1245] Controller detected that zone is now in state Normal. | |
I1228 06:22:50.623525 1 shared_informer.go:247] Caches are synced for persistent volume | |
I1228 06:22:50.624596 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-serving | |
I1228 06:22:50.628839 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-client | |
I1228 06:22:50.634574 1 shared_informer.go:247] Caches are synced for service account | |
I1228 06:22:50.640433 1 shared_informer.go:247] Caches are synced for ReplicaSet | |
I1228 06:22:50.644221 1 shared_informer.go:247] Caches are synced for attach detach | |
I1228 06:22:50.722721 1 shared_informer.go:247] Caches are synced for endpoint_slice | |
I1228 06:22:50.735389 1 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring | |
I1228 06:22:50.792014 1 shared_informer.go:247] Caches are synced for resource quota | |
I1228 06:22:50.972622 1 shared_informer.go:240] Waiting for caches to sync for garbage collector | |
I1228 06:22:51.258969 1 shared_informer.go:247] Caches are synced for garbage collector | |
I1228 06:22:51.259084 1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage | |
I1228 06:22:51.273271 1 shared_informer.go:247] Caches are synced for garbage collector | |
I1228 06:22:51.316026 1 request.go:655] Throttling request took 1.04542717s, request: GET:https://192.168.1.91:6443/apis/certificates.k8s.io/v1?timeout=32s | |
I1228 06:22:52.121510 1 shared_informer.go:240] Waiting for caches to sync for resource quota | |
I1228 06:22:52.121643 1 shared_informer.go:247] Caches are synced for resource quota | |
E1228 06:25:48.779986 1 leaderelection.go:325] error retrieving resource lock kube-system/kube-controller-manager: Get "https://192.168.1.91:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=10s": context deadline exceeded | |
I1228 06:25:48.780319 1 leaderelection.go:278] failed to renew lease kube-system/kube-controller-manager: timed out waiting for the condition | |
I1228 06:25:48.780816 1 event.go:291] "Event occurred" object="" kind="Lease" apiVersion="coordination.k8s.io/v1" type="Normal" reason="LeaderElection" message="mc1_7465adbc-a10c-449b-99a9-63b49d1e1c46 stopped leading" | |
I1228 06:25:48.781287 1 node_lifecycle_controller.go:589] Shutting down node controller | |
I1228 06:25:48.781432 1 resource_quota_controller.go:292] Shutting down resource quota controller | |
I1228 06:25:48.781789 1 endpointslicemirroring_controller.go:224] Shutting down EndpointSliceMirroring controller | |
I1228 06:25:48.781916 1 endpointslice_controller.go:253] Shutting down endpoint slice controller | |
I1228 06:25:48.781965 1 garbagecollector.go:160] Shutting down garbage collector controller | |
I1228 06:25:48.782280 1 attach_detach_controller.go:367] Shutting down attach detach controller | |
I1228 06:25:48.782396 1 replica_set.go:194] Shutting down replicaset controller | |
I1228 06:25:48.782516 1 serviceaccounts_controller.go:129] Shutting down service account controller | |
I1228 06:25:48.782609 1 certificate_controller.go:130] Shutting down certificate controller "csrsigning-kubelet-client" | |
I1228 06:25:48.782713 1 certificate_controller.go:130] Shutting down certificate controller "csrsigning-kubelet-serving" | |
I1228 06:25:48.782803 1 pv_controller_base.go:323] Shutting down persistent volume controller | |
I1228 06:25:48.782882 1 pv_controller_base.go:517] claim worker queue shutting down | |
I1228 06:25:48.782954 1 resource_quota_controller.go:261] resource quota controller worker shutting down | |
I1228 06:25:48.783074 1 endpoints_controller.go:201] Shutting down endpoint controller | |
I1228 06:25:48.783143 1 resource_quota_controller.go:261] resource quota controller worker shutting down | |
I1228 06:25:48.783210 1 resource_quota_controller.go:261] resource quota controller worker shutting down | |
I1228 06:25:48.783272 1 resource_quota_controller.go:261] resource quota controller worker shutting down | |
I1228 06:25:48.783323 1 job_controller.go:160] Shutting down job controller | |
I1228 06:25:48.783434 1 replica_set.go:194] Shutting down replicationcontroller controller | |
I1228 06:25:48.783583 1 node_ipam_controller.go:171] Shutting down ipam controller | |
I1228 06:25:48.783767 1 pv_controller_base.go:460] volume worker queue shutting down | |
I1228 06:25:48.783901 1 disruption.go:348] Shutting down disruption controller | |
I1228 06:25:48.783993 1 daemon_controller.go:299] Shutting down daemon sets controller | |
I1228 06:25:48.784895 1 expand_controller.go:322] Shutting down expand controller | |
I1228 06:25:48.785061 1 pvc_protection_controller.go:122] Shutting down PVC protection controller | |
I1228 06:25:48.785219 1 ttl_controller.go:133] Shutting down TTL controller | |
I1228 06:25:48.785307 1 certificate_controller.go:130] Shutting down certificate controller "csrapproving" | |
I1228 06:25:48.785419 1 range_allocator.go:184] Shutting down range CIDR allocator | |
I1228 06:25:48.785465 1 certificate_controller.go:130] Shutting down certificate controller "csrsigning-legacy-unknown" | |
I1228 06:25:48.785576 1 graph_builder.go:317] stopped 48 of 48 monitors | |
I1228 06:25:48.785615 1 graph_builder.go:318] GraphBuilder stopping | |
I1228 06:25:48.785848 1 certificate_controller.go:130] Shutting down certificate controller "csrsigning-kube-apiserver-client" | |
I1228 06:25:48.786205 1 cleaner.go:90] Shutting down CSR cleaner controller | |
I1228 06:25:48.786312 1 dynamic_serving_content.go:145] Shutting down csr-controller::/etc/kubernetes/pki/ca.crt::/etc/kubernetes/pki/ca.key | |
I1228 06:25:48.786587 1 gc_controller.go:100] Shutting down GC controller | |
I1228 06:25:48.786674 1 stateful_set.go:158] Shutting down statefulset controller | |
I1228 06:25:48.786770 1 clusterroleaggregation_controller.go:161] Shutting down ClusterRoleAggregator | |
I1228 06:25:48.786860 1 namespace_controller.go:212] Shutting down namespace controller | |
I1228 06:25:48.787191 1 horizontal.go:180] Shutting down HPA controller | |
I1228 06:25:48.787453 1 pv_protection_controller.go:95] Shutting down PV protection controller | |
I1228 06:25:48.787491 1 dynamic_serving_content.go:145] Shutting down csr-controller::/etc/kubernetes/pki/ca.crt::/etc/kubernetes/pki/ca.key | |
I1228 06:25:48.787258 1 publisher.go:110] Shutting down root CA certificate configmap publisher | |
I1228 06:25:48.787634 1 horizontal.go:215] horizontal pod autoscaler controller worker shutting down | |
I1228 06:25:48.787653 1 dynamic_serving_content.go:145] Shutting down csr-controller::/etc/kubernetes/pki/ca.crt::/etc/kubernetes/pki/ca.key | |
F1228 06:25:48.783015 1 controllermanager.go:294] leaderelection lost | |
goroutine 1 [running]: | |
k8s.io/kubernetes/vendor/k8s.io/klog/v2.stacks(0x56c0b01, 0x0, 0x4c, 0x93) | |
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1026 +0x94 | |
k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).output(0x56ac8e0, 0x3, 0x0, 0x0, 0x7280420, 0x554cf14, 0x14, 0x126, 0x0) | |
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:975 +0x110 | |
k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).printf(0x56ac8e0, 0x3, 0x0, 0x0, 0x0, 0x0, 0x34a2738, 0x13, 0x0, 0x0, ...) | |
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:750 +0x130 | |
k8s.io/kubernetes/vendor/k8s.io/klog/v2.Fatalf(...) | |
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1502 | |
k8s.io/kubernetes/cmd/kube-controller-manager/app.Run.func2() | |
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kube-controller-manager/app/controllermanager.go:294 +0x78 | |
k8s.io/kubernetes/vendor/k8s.io/client-go/tools/leaderelection.(*LeaderElector).Run.func1(0x5abc2c0) | |
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/leaderelection/leaderelection.go:199 +0x20 | |
k8s.io/kubernetes/vendor/k8s.io/client-go/tools/leaderelection.(*LeaderElector).Run(0x5abc2c0, 0x3a8a648, 0x6289920) | |
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/leaderelection/leaderelection.go:209 +0x11c | |
k8s.io/kubernetes/vendor/k8s.io/client-go/tools/leaderelection.RunOrDie(0x3a8a668, 0x584602c, 0x3a925e8, 0x5abc210, 0x7e11d600, 0x3, 0x540be400, 0x2, 0x77359400, 0x0, ...) | |
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/leaderelection/leaderelection.go:222 +0x58 | |
k8s.io/kubernetes/cmd/kube-controller-manager/app.Run(0x5b0eb48, 0x587a080, 0x5a6cbdc, 0x5a31018) | |
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kube-controller-manager/app/controllermanager.go:286 +0x800 | |
k8s.io/kubernetes/cmd/kube-controller-manager/app.NewControllerManagerCommand.func2(0x59ec2c0, 0x5d75a70, 0x0, 0x12) | |
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kube-controller-manager/app/controllermanager.go:125 +0x1c4 | |
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute(0x59ec2c0, 0x58400a8, 0x12, 0x13, 0x59ec2c0, 0x58400a8) | |
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:854 +0x1e8 | |
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC(0x59ec2c0, 0xb4f374fa, 0x1654cd00, 0x0) | |
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:958 +0x274 | |
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute(...) | |
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:895 | |
main.main() | |
_output/dockerized/go/src/k8s.io/kubernetes/cmd/kube-controller-manager/controller-manager.go:46 +0xe0 | |
(a lot of similar debugging data follows) |
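#
# The fatal "leaderelection lost" above is what terminates kube-controller-manager and triggers
# the restarts. Suggestion only: something along these lines might help dig further (pod names
# such as kube-controller-manager-mc1 are taken from the kubectl listing further down, and
# --previous only returns output once the container has restarted at least once):
#
> kubectl -n kube-system logs kube-controller-manager-mc1 --previous | grep -E 'leaderelection|^F'
> kubectl -n kube-system logs kube-apiserver-mc1 --previous | tail -n 50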
> dpkg -l|grep kube | |
ii kubeadm 1.20.1-00 armhf Kubernetes Cluster Bootstrapping Tool | |
ii kubectl 1.20.1-00 armhf Kubernetes Command Line Tool | |
ii kubectx 0.6.2-1 all Fast way to switch between clusters and namespaces in kubectl | |
ii kubelet 1.20.1-00 armhf Kubernetes Node Agent | |
ii kubernetes-cni 0.8.7-00 armhf Kubernetes CNI | |
> dpkg -l|grep docker | |
ii docker-ce 5:20.10.1~3-0~debian-buster armhf Docker: the open-source application container engine | |
ii docker-ce-cli 5:20.10.1~3-0~debian-buster armhf Docker CLI: the open-source application container engine | |
> uname -a | |
Linux mc1 5.9.14-odroidxu4 #20.11.3 SMP PREEMPT Fri Dec 11 21:37:36 CET 2020 armv7l GNU/Linux | |
> grep -i memtotal /proc/meminfo | |
MemTotal: 2040036 kB | |
> cat /proc/cpuinfo|head | |
processor : 0 | |
model name : ARMv7 Processor rev 3 (v7l) | |
BogoMIPS : 84.00 | |
Features : half thumb fastmult vfp edsp neon vfpv3 tls vfpv4 idiva idivt vfpd32 lpae evtstrm | |
CPU implementer : 0x41 | |
CPU architecture: 7 | |
CPU variant : 0x0 | |
CPU part : 0xc07 | |
CPU revision : 3 | |
(... 8 cores) | |
> cat /etc/issue.net | |
Armbian 20.11.3 Buster | |
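#
# Suggestion only: on a 2 GB armv7 board it may be worth ruling out memory pressure and
# container-runtime restarts as the trigger before looking at Kubernetes itself:
#
> free -m
> dmesg -T | grep -iE 'oom|out of memory'
> systemctl --no-pager status docker kubelet | grep -E 'Active|since'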
# | |
# The following commands might run instantly, fail (when the API server is down), or take a couple of minutes to produce any output... | |
# | |
> kubectl get pods -o wide --all-namespaces | |
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES | |
kube-system coredns-74ff55c5b-6hczq 1/1 Running 2 20h 10.10.0.6 mc1 <none> <none> | |
kube-system coredns-74ff55c5b-xm8rz 1/1 Running 2 20h 10.10.0.7 mc1 <none> <none> | |
kube-system etcd-mc1 1/1 Running 2 20h 192.168.1.91 mc1 <none> <none> | |
kube-system kube-apiserver-mc1 1/1 Running 40 20h 192.168.1.91 mc1 <none> <none> | |
kube-system kube-controller-manager-mc1 1/1 Running 9 20h 192.168.1.91 mc1 <none> <none> | |
kube-system kube-flannel-ds-2rz8w 1/1 Running 5 20h 192.168.1.93 mc3 <none> <none> | |
kube-system kube-flannel-ds-5dxt7 1/1 Running 1 20h 192.168.1.92 mc2 <none> <none> | |
kube-system kube-flannel-ds-j9vpp 1/1 Running 5 20h 192.168.1.94 mc4 <none> <none> | |
kube-system kube-flannel-ds-xvfbn 1/1 Running 2 20h 192.168.1.91 mc1 <none> <none> | |
kube-system kube-proxy-jvpx9 1/1 Running 2 20h 192.168.1.91 mc1 <none> <none> | |
kube-system kube-proxy-mq25n 1/1 Running 1 20h 192.168.1.92 mc2 <none> <none> | |
kube-system kube-proxy-pbxt6 1/1 Running 1 20h 192.168.1.93 mc3 <none> <none> | |
kube-system kube-proxy-zswqd 1/1 Running 1 20h 192.168.1.94 mc4 <none> <none> | |
kube-system kube-scheduler-mc1 1/1 Running 9 20h 192.168.1.91 mc1 <none> <none> | |
> kubectl get nodes | |
NAME STATUS ROLES AGE VERSION | |
mc1 Ready control-plane,master 20h v1.20.1 | |
mc2 Ready <none> 20h v1.20.1 | |
mc3 Ready <none> 20h v1.20.1 | |
mc4 Ready <none> 20h v1.20.1 |
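#
# A possible way to correlate the restarts with API server availability (6443 on mc1, as shown
# above); run on mc1 and leave the watch running while the problem reproduces:
#
> kubectl get --raw='/readyz?verbose' | tail -n 5
> watch -n 10 "kubectl -n kube-system get pods | grep -E 'apiserver|controller-manager|scheduler|etcd'"
> journalctl -u kubelet --since '30 min ago' | grep -iE 'error|fail' | tail -n 20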