Created November 4, 2020 15:19
==> Docker <==
-- Logs begin at Wed 2020-11-04 14:31:59 UTC, end at Wed 2020-11-04 15:17:03 UTC. --
Nov 04 14:47:28 minikube dockerd[381]: time="2020-11-04T14:47:28.454381299Z" level=warning msg="Error getting v2 registry: Get https://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Nov 04 14:47:28 minikube dockerd[381]: time="2020-11-04T14:47:28.454566995Z" level=info msg="Attempting next endpoint for pull after error: Get https://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Nov 04 14:47:28 minikube dockerd[381]: time="2020-11-04T14:47:28.454782697Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get https://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Nov 04 14:47:33 minikube dockerd[381]: time="2020-11-04T14:47:33.456085078Z" level=warning msg="Error getting v2 registry: Get https://registry-1.docker.io/v2/: dial tcp: lookup registry-1.docker.io on 192.168.49.1:53: read udp 192.168.49.2:41344->192.168.49.1:53: i/o timeout"
Nov 04 14:47:33 minikube dockerd[381]: time="2020-11-04T14:47:33.456186476Z" level=info msg="Attempting next endpoint for pull after error: Get https://registry-1.docker.io/v2/: dial tcp: lookup registry-1.docker.io on 192.168.49.1:53: read udp 192.168.49.2:41344->192.168.49.1:53: i/o timeout"
Nov 04 14:47:33 minikube dockerd[381]: time="2020-11-04T14:47:33.456262181Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get https://registry-1.docker.io/v2/: dial tcp: lookup registry-1.docker.io on 192.168.49.1:53: read udp 192.168.49.2:41344->192.168.49.1:53: i/o timeout"
Nov 04 14:47:57 minikube dockerd[381]: time="2020-11-04T14:47:57.077001959Z" level=warning msg="Error getting v2 registry: Get https://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Nov 04 14:47:57 minikube dockerd[381]: time="2020-11-04T14:47:57.077122673Z" level=info msg="Attempting next endpoint for pull after error: Get https://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Nov 04 14:47:57 minikube dockerd[381]: time="2020-11-04T14:47:57.077209471Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get https://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Nov 04 14:48:02 minikube dockerd[381]: time="2020-11-04T14:48:02.080430579Z" level=warning msg="Error getting v2 registry: Get https://registry-1.docker.io/v2/: dial tcp: lookup registry-1.docker.io on 192.168.49.1:53: read udp 192.168.49.2:50336->192.168.49.1:53: i/o timeout"
Nov 04 14:48:02 minikube dockerd[381]: time="2020-11-04T14:48:02.080527888Z" level=info msg="Attempting next endpoint for pull after error: Get https://registry-1.docker.io/v2/: dial tcp: lookup registry-1.docker.io on 192.168.49.1:53: read udp 192.168.49.2:50336->192.168.49.1:53: i/o timeout"
Nov 04 14:48:02 minikube dockerd[381]: time="2020-11-04T14:48:02.080630417Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get https://registry-1.docker.io/v2/: dial tcp: lookup registry-1.docker.io on 192.168.49.1:53: read udp 192.168.49.2:50336->192.168.49.1:53: i/o timeout"
Nov 04 14:48:36 minikube dockerd[381]: time="2020-11-04T14:48:36.081872245Z" level=warning msg="Error getting v2 registry: Get https://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Nov 04 14:48:36 minikube dockerd[381]: time="2020-11-04T14:48:36.082075093Z" level=info msg="Attempting next endpoint for pull after error: Get https://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Nov 04 14:48:36 minikube dockerd[381]: time="2020-11-04T14:48:36.082346936Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get https://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Nov 04 14:48:41 minikube dockerd[381]: time="2020-11-04T14:48:41.084861963Z" level=warning msg="Error getting v2 registry: Get https://registry-1.docker.io/v2/: dial tcp: lookup registry-1.docker.io on 192.168.49.1:53: read udp 192.168.49.2:51468->192.168.49.1:53: i/o timeout"
Nov 04 14:48:41 minikube dockerd[381]: time="2020-11-04T14:48:41.085901920Z" level=info msg="Attempting next endpoint for pull after error: Get https://registry-1.docker.io/v2/: dial tcp: lookup registry-1.docker.io on 192.168.49.1:53: read udp 192.168.49.2:51468->192.168.49.1:53: i/o timeout"
Nov 04 14:48:41 minikube dockerd[381]: time="2020-11-04T14:48:41.086081755Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get https://registry-1.docker.io/v2/: dial tcp: lookup registry-1.docker.io on 192.168.49.1:53: read udp 192.168.49.2:51468->192.168.49.1:53: i/o timeout"
Nov 04 14:49:33 minikube dockerd[381]: time="2020-11-04T14:49:33.079705580Z" level=warning msg="Error getting v2 registry: Get https://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Nov 04 14:49:33 minikube dockerd[381]: time="2020-11-04T14:49:33.080397992Z" level=info msg="Attempting next endpoint for pull after error: Get https://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Nov 04 14:49:33 minikube dockerd[381]: time="2020-11-04T14:49:33.080586795Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get https://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Nov 04 14:49:38 minikube dockerd[381]: time="2020-11-04T14:49:38.081636431Z" level=warning msg="Error getting v2 registry: Get https://registry-1.docker.io/v2/: dial tcp: lookup registry-1.docker.io on 192.168.49.1:53: read udp 192.168.49.2:37712->192.168.49.1:53: i/o timeout"
Nov 04 14:49:38 minikube dockerd[381]: time="2020-11-04T14:49:38.081721973Z" level=info msg="Attempting next endpoint for pull after error: Get https://registry-1.docker.io/v2/: dial tcp: lookup registry-1.docker.io on 192.168.49.1:53: read udp 192.168.49.2:37712->192.168.49.1:53: i/o timeout"
Nov 04 14:49:38 minikube dockerd[381]: time="2020-11-04T14:49:38.081779626Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get https://registry-1.docker.io/v2/: dial tcp: lookup registry-1.docker.io on 192.168.49.1:53: read udp 192.168.49.2:37712->192.168.49.1:53: i/o timeout"
Nov 04 14:51:17 minikube dockerd[381]: time="2020-11-04T14:51:17.077897543Z" level=warning msg="Error getting v2 registry: Get https://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Nov 04 14:51:17 minikube dockerd[381]: time="2020-11-04T14:51:17.078223194Z" level=info msg="Attempting next endpoint for pull after error: Get https://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Nov 04 14:51:17 minikube dockerd[381]: time="2020-11-04T14:51:17.078733915Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get https://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Nov 04 14:51:22 minikube dockerd[381]: time="2020-11-04T14:51:22.079130052Z" level=warning msg="Error getting v2 registry: Get https://registry-1.docker.io/v2/: dial tcp: lookup registry-1.docker.io on 192.168.49.1:53: read udp 192.168.49.2:48437->192.168.49.1:53: i/o timeout"
Nov 04 14:51:22 minikube dockerd[381]: time="2020-11-04T14:51:22.079359971Z" level=info msg="Attempting next endpoint for pull after error: Get https://registry-1.docker.io/v2/: dial tcp: lookup registry-1.docker.io on 192.168.49.1:53: read udp 192.168.49.2:48437->192.168.49.1:53: i/o timeout"
Nov 04 14:51:22 minikube dockerd[381]: time="2020-11-04T14:51:22.079565409Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get https://registry-1.docker.io/v2/: dial tcp: lookup registry-1.docker.io on 192.168.49.1:53: read udp 192.168.49.2:48437->192.168.49.1:53: i/o timeout"
Nov 04 14:54:19 minikube dockerd[381]: time="2020-11-04T14:54:19.076845759Z" level=warning msg="Error getting v2 registry: Get https://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Nov 04 14:54:19 minikube dockerd[381]: time="2020-11-04T14:54:19.078092849Z" level=info msg="Attempting next endpoint for pull after error: Get https://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Nov 04 14:54:19 minikube dockerd[381]: time="2020-11-04T14:54:19.078363790Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get https://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Nov 04 14:54:24 minikube dockerd[381]: time="2020-11-04T14:54:24.079183464Z" level=warning msg="Error getting v2 registry: Get https://registry-1.docker.io/v2/: dial tcp: lookup registry-1.docker.io on 192.168.49.1:53: read udp 192.168.49.2:55613->192.168.49.1:53: i/o timeout"
Nov 04 14:54:24 minikube dockerd[381]: time="2020-11-04T14:54:24.080191682Z" level=info msg="Attempting next endpoint for pull after error: Get https://registry-1.docker.io/v2/: dial tcp: lookup registry-1.docker.io on 192.168.49.1:53: read udp 192.168.49.2:55613->192.168.49.1:53: i/o timeout"
Nov 04 14:54:24 minikube dockerd[381]: time="2020-11-04T14:54:24.080407644Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get https://registry-1.docker.io/v2/: dial tcp: lookup registry-1.docker.io on 192.168.49.1:53: read udp 192.168.49.2:55613->192.168.49.1:53: i/o timeout"
Nov 04 14:59:35 minikube dockerd[381]: time="2020-11-04T14:59:35.086872114Z" level=warning msg="Error getting v2 registry: Get https://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Nov 04 14:59:35 minikube dockerd[381]: time="2020-11-04T14:59:35.088864576Z" level=info msg="Attempting next endpoint for pull after error: Get https://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Nov 04 14:59:35 minikube dockerd[381]: time="2020-11-04T14:59:35.089270508Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get https://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Nov 04 14:59:40 minikube dockerd[381]: time="2020-11-04T14:59:40.089786812Z" level=warning msg="Error getting v2 registry: Get https://registry-1.docker.io/v2/: dial tcp: lookup registry-1.docker.io on 192.168.49.1:53: read udp 192.168.49.2:51828->192.168.49.1:53: i/o timeout"
Nov 04 14:59:40 minikube dockerd[381]: time="2020-11-04T14:59:40.089893654Z" level=info msg="Attempting next endpoint for pull after error: Get https://registry-1.docker.io/v2/: dial tcp: lookup registry-1.docker.io on 192.168.49.1:53: read udp 192.168.49.2:51828->192.168.49.1:53: i/o timeout"
Nov 04 14:59:40 minikube dockerd[381]: time="2020-11-04T14:59:40.089967851Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get https://registry-1.docker.io/v2/: dial tcp: lookup registry-1.docker.io on 192.168.49.1:53: read udp 192.168.49.2:51828->192.168.49.1:53: i/o timeout"
Nov 04 15:04:56 minikube dockerd[381]: time="2020-11-04T15:04:56.076734506Z" level=warning msg="Error getting v2 registry: Get https://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Nov 04 15:04:56 minikube dockerd[381]: time="2020-11-04T15:04:56.076939621Z" level=info msg="Attempting next endpoint for pull after error: Get https://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Nov 04 15:04:56 minikube dockerd[381]: time="2020-11-04T15:04:56.077168749Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get https://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Nov 04 15:05:01 minikube dockerd[381]: time="2020-11-04T15:05:01.079526529Z" level=warning msg="Error getting v2 registry: Get https://registry-1.docker.io/v2/: dial tcp: lookup registry-1.docker.io on 192.168.49.1:53: read udp 192.168.49.2:35471->192.168.49.1:53: i/o timeout"
Nov 04 15:05:01 minikube dockerd[381]: time="2020-11-04T15:05:01.079690392Z" level=info msg="Attempting next endpoint for pull after error: Get https://registry-1.docker.io/v2/: dial tcp: lookup registry-1.docker.io on 192.168.49.1:53: read udp 192.168.49.2:35471->192.168.49.1:53: i/o timeout"
Nov 04 15:05:01 minikube dockerd[381]: time="2020-11-04T15:05:01.079867662Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get https://registry-1.docker.io/v2/: dial tcp: lookup registry-1.docker.io on 192.168.49.1:53: read udp 192.168.49.2:35471->192.168.49.1:53: i/o timeout"
Nov 04 15:10:17 minikube dockerd[381]: time="2020-11-04T15:10:17.083711201Z" level=warning msg="Error getting v2 registry: Get https://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Nov 04 15:10:17 minikube dockerd[381]: time="2020-11-04T15:10:17.084056702Z" level=info msg="Attempting next endpoint for pull after error: Get https://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Nov 04 15:10:17 minikube dockerd[381]: time="2020-11-04T15:10:17.084458258Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get https://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Nov 04 15:10:22 minikube dockerd[381]: time="2020-11-04T15:10:22.086690710Z" level=warning msg="Error getting v2 registry: Get https://registry-1.docker.io/v2/: dial tcp: lookup registry-1.docker.io on 192.168.49.1:53: read udp 192.168.49.2:51468->192.168.49.1:53: i/o timeout"
Nov 04 15:10:22 minikube dockerd[381]: time="2020-11-04T15:10:22.086833731Z" level=info msg="Attempting next endpoint for pull after error: Get https://registry-1.docker.io/v2/: dial tcp: lookup registry-1.docker.io on 192.168.49.1:53: read udp 192.168.49.2:51468->192.168.49.1:53: i/o timeout"
Nov 04 15:10:22 minikube dockerd[381]: time="2020-11-04T15:10:22.086972390Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get https://registry-1.docker.io/v2/: dial tcp: lookup registry-1.docker.io on 192.168.49.1:53: read udp 192.168.49.2:51468->192.168.49.1:53: i/o timeout"
Nov 04 15:15:36 minikube dockerd[381]: time="2020-11-04T15:15:36.084473479Z" level=warning msg="Error getting v2 registry: Get https://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Nov 04 15:15:36 minikube dockerd[381]: time="2020-11-04T15:15:36.084684900Z" level=info msg="Attempting next endpoint for pull after error: Get https://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Nov 04 15:15:36 minikube dockerd[381]: time="2020-11-04T15:15:36.084926431Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get https://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Nov 04 15:15:41 minikube dockerd[381]: time="2020-11-04T15:15:41.089981880Z" level=warning msg="Error getting v2 registry: Get https://registry-1.docker.io/v2/: dial tcp: lookup registry-1.docker.io on 192.168.49.1:53: read udp 192.168.49.2:39476->192.168.49.1:53: i/o timeout"
Nov 04 15:15:41 minikube dockerd[381]: time="2020-11-04T15:15:41.090148754Z" level=info msg="Attempting next endpoint for pull after error: Get https://registry-1.docker.io/v2/: dial tcp: lookup registry-1.docker.io on 192.168.49.1:53: read udp 192.168.49.2:39476->192.168.49.1:53: i/o timeout"
Nov 04 15:15:41 minikube dockerd[381]: time="2020-11-04T15:15:41.090268921Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get https://registry-1.docker.io/v2/: dial tcp: lookup registry-1.docker.io on 192.168.49.1:53: read udp 192.168.49.2:39476->192.168.49.1:53: i/o timeout"
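Every dockerd failure above is the same root symptom: DNS lookups for registry-1.docker.io, sent from the minikube container (192.168.49.2) to the host-side resolver 192.168.49.1:53, time out. A minimal sketch of confirming which upstream is failing, by parsing one of the log lines (the embedded sample line is copied from the log; the `minikube ssh` follow-up is an assumption about the environment and is left as a comment):

```shell
# One of the repeating dockerd error lines, pasted from the log above.
line='Nov 04 14:47:33 minikube dockerd[381]: time="2020-11-04T14:47:33.456085078Z" level=warning msg="Error getting v2 registry: Get https://registry-1.docker.io/v2/: dial tcp: lookup registry-1.docker.io on 192.168.49.1:53: read udp 192.168.49.2:41344->192.168.49.1:53: i/o timeout"'

# Pull out the upstream resolver address the failed lookup was sent to.
resolver=$(printf '%s\n' "$line" | grep -oE 'on [0-9.]+:53' | head -n1 | awk '{print $2}')
echo "resolver: $resolver"

# Next step would be to query that resolver from inside the minikube
# container, e.g. (requires a running minikube, so left as a comment):
#   minikube ssh -- nslookup registry-1.docker.io 192.168.49.1
```

If the direct query also times out, the problem is on the host side (a firewall blocking the bridge network, or a VPN rewriting DNS) rather than inside the cluster.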
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
4cdbbad4d9969 bfe3a36ebd252 43 minutes ago Running coredns 0 7c8c2292926d3
d0eb30e789a6d bad58561c4be7 43 minutes ago Running storage-provisioner 0 e0912e06b521a
186080efe05c7 d373dd5a8593a 43 minutes ago Running kube-proxy 0 8cd608036a6d7
856aaf0e58bf1 8603821e1a7a5 44 minutes ago Running kube-controller-manager 0 19448d3d24b2f
ee494f0871662 607331163122e 44 minutes ago Running kube-apiserver 0 197bc204de450
a22e98d48c5a4 2f32d66b884f8 44 minutes ago Running kube-scheduler 0 99f86099df32f
038315f0f3133 0369cf4303ffd 44 minutes ago Running etcd 0 0a093258999c9
==> coredns [4cdbbad4d996] <==
.:53
[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
CoreDNS-1.7.0
linux/amd64, go1.14.4, f59c03d
[ERROR] plugin/errors: 2 4013716754494250837.6936409099749549163. HINFO: read udp 172.17.0.4:44787->192.168.49.1:53: i/o timeout
[ERROR] plugin/errors: 2 4013716754494250837.6936409099749549163. HINFO: read udp 172.17.0.4:49080->192.168.49.1:53: i/o timeout
[ERROR] plugin/errors: 2 4013716754494250837.6936409099749549163. HINFO: read udp 172.17.0.4:44823->192.168.49.1:53: i/o timeout
[ERROR] plugin/errors: 2 4013716754494250837.6936409099749549163. HINFO: read udp 172.17.0.4:54562->192.168.49.1:53: i/o timeout
[ERROR] plugin/errors: 2 4013716754494250837.6936409099749549163. HINFO: read udp 172.17.0.4:44664->192.168.49.1:53: i/o timeout
[ERROR] plugin/errors: 2 4013716754494250837.6936409099749549163. HINFO: read udp 172.17.0.4:51615->192.168.49.1:53: i/o timeout
[ERROR] plugin/errors: 2 4013716754494250837.6936409099749549163. HINFO: read udp 172.17.0.4:48634->192.168.49.1:53: i/o timeout
[ERROR] plugin/errors: 2 4013716754494250837.6936409099749549163. HINFO: read udp 172.17.0.4:38909->192.168.49.1:53: i/o timeout
[ERROR] plugin/errors: 2 4013716754494250837.6936409099749549163. HINFO: read udp 172.17.0.4:60404->192.168.49.1:53: i/o timeout
[ERROR] plugin/errors: 2 4013716754494250837.6936409099749549163. HINFO: read udp 172.17.0.4:42609->192.168.49.1:53: i/o timeout
==> describe nodes <==
Name: minikube
Roles: master
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=minikube
kubernetes.io/os=linux
minikube.k8s.io/commit=2c82918e2347188e21c4e44c8056fc80408bce10
minikube.k8s.io/name=minikube
minikube.k8s.io/updated_at=2020_11_04T09_33_01_0700
minikube.k8s.io/version=v1.14.2
node-role.kubernetes.io/master=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Wed, 04 Nov 2020 14:32:57 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: minikube
AcquireTime: <unset>
RenewTime: Wed, 04 Nov 2020 15:17:00 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Wed, 04 Nov 2020 15:13:32 +0000 Wed, 04 Nov 2020 14:32:48 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Wed, 04 Nov 2020 15:13:32 +0000 Wed, 04 Nov 2020 14:32:48 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Wed, 04 Nov 2020 15:13:32 +0000 Wed, 04 Nov 2020 14:32:48 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Wed, 04 Nov 2020 15:13:32 +0000 Wed, 04 Nov 2020 14:33:18 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.49.2
Hostname: minikube
Capacity:
cpu: 4
ephemeral-storage: 15350Mi
hugepages-2Mi: 0
memory: 8144428Ki
pods: 110
Allocatable:
cpu: 4
ephemeral-storage: 15350Mi
hugepages-2Mi: 0
memory: 8144428Ki
pods: 110
System Info:
Machine ID: 07495bd4fa5748b098f9fdfc415db357
System UUID: 408073a4-50cd-4a2d-aaee-8b042d97e6ea
Boot ID: 6f23a0a1-f786-438f-ab3f-e54e6f965b59
Kernel Version: 5.6.6-300.fc32.x86_64
OS Image: Ubuntu 20.04 LTS
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://19.3.8
Kubelet Version: v1.19.2
Kube-Proxy Version: v1.19.2
Non-terminated Pods: (10 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE
--------- ---- ------------ ---------- --------------- ------------- ---
kube-system coredns-f9fd979d6-j29xl 100m (2%) 0 (0%) 70Mi (0%) 170Mi (2%) 43m
kube-system etcd-minikube 0 (0%) 0 (0%) 0 (0%) 0 (0%) 43m
kube-system ingress-nginx-admission-create-tr846 0 (0%) 0 (0%) 0 (0%) 0 (0%) 29m
kube-system ingress-nginx-admission-patch-k74gx 0 (0%) 0 (0%) 0 (0%) 0 (0%) 29m
kube-system ingress-nginx-controller-799c9469f7-2j2mk 100m (2%) 0 (0%) 90Mi (1%) 0 (0%) 30m
kube-system kube-apiserver-minikube 250m (6%) 0 (0%) 0 (0%) 0 (0%) 43m
kube-system kube-controller-manager-minikube 200m (5%) 0 (0%) 0 (0%) 0 (0%) 43m
kube-system kube-proxy-wsc4c 0 (0%) 0 (0%) 0 (0%) 0 (0%) 43m
kube-system kube-scheduler-minikube 100m (2%) 0 (0%) 0 (0%) 0 (0%) 43m
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 44m
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 750m (18%) 0 (0%)
memory 160Mi (2%) 170Mi (2%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal NodeHasSufficientMemory 44m (x5 over 44m) kubelet Node minikube status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 44m (x5 over 44m) kubelet Node minikube status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 44m (x5 over 44m) kubelet Node minikube status is now: NodeHasSufficientPID
Normal Starting 43m kubelet Starting kubelet.
Normal NodeHasSufficientMemory 43m kubelet Node minikube status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 43m kubelet Node minikube status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 43m kubelet Node minikube status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 43m kubelet Updated Node Allocatable limit across pods
Normal Starting 43m kube-proxy Starting kube-proxy.
Normal NodeReady 43m kubelet Node minikube status is now: NodeReady
==> dmesg <==
[Nov 3 19:14] #2
[ +0.012015] #3
[ +0.218842] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
[ +2.490984] sd 2:0:0:0: Power-on or device reset occurred
[ +2.410523] kauditd_printk_skb: 36 callbacks suppressed
[ +0.066321] xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
[ +1.371531] xfs filesystem being mounted at /boot supports timestamps until 2038 (0x7fffffff)
[Nov 3 19:16] process 'docker/tmp/qemu-check212006879/check' started with executable stack
[Nov 3 19:22] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to [email protected] if you depend on this functionality.
[Nov 3 19:23] ------------[ cut here ]------------
[ +0.000004] rq->tmp_alone_branch != &rq->leaf_cfs_rq_list
[ +0.000034] WARNING: CPU: 3 PID: 0 at kernel/sched/fair.c:380 enqueue_task_fair+0x23b/0x4c0
[ +0.000001] Modules linked in: xt_comment xt_mark xt_nat veth xt_conntrack xt_MASQUERADE nf_conntrack_netlink xt_addrtype br_netfilter bridge stp llc overlay nft_fib_inet nft_fib_ipv4 nft_fib_ipv6 nft_fib rfkill nft_reject_inet nf_reject_ipv4 nf_reject_ipv6 nft_reject nft_ct nf_tables_set nft_chain_nat ip6table_nat ip6table_mangle ip6table_raw ip6table_security iptable_nat nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 iptable_mangle iptable_raw iptable_security ip_set nf_tables nfnetlink ip6table_filter ip6_tables iptable_filter sunrpc bochs_drm drm_vram_helper drm_ttm_helper ttm drm_kms_helper joydev virtio_balloon i2c_piix4 drm ip_tables xfs libcrc32c virtio_net serio_raw net_failover failover virtio_scsi ata_generic pata_acpi qemu_fw_cfg pkcs8_key_parser
[ +0.000027] CPU: 3 PID: 0 Comm: swapper/3 Not tainted 5.6.6-300.fc32.x86_64 #1
[ +0.000001] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.13.0-48-gd9c812dda519-prebuilt.qemu.org 04/01/2014
[ +0.000002] RIP: 0010:enqueue_task_fair+0x23b/0x4c0
[ +0.000002] Code: 60 09 00 00 0f 84 fe fe ff ff 80 3d ec 75 6f 01 00 0f 85 f1 fe ff ff 48 c7 c7 08 92 35 83 c6 05 d8 75 6f 01 01 e8 8c 11 fc ff <0f> 0b e9 d7 fe ff ff 8b b5 80 01 00 00 85 f6 0f 85 55 fe ff ff e9
[ +0.000001] RSP: 0018:ffffa41840110e78 EFLAGS: 00010096
[ +0.000001] RAX: 000000000000002d RBX: 0000000000000000 RCX: 00000000ffffffff
[ +0.000001] RDX: 000000000000000d RSI: ffffffff84274460 RDI: 0000000000000046
[ +0.000000] RBP: ffff9812b7daaf00 R08: 71725f7366635f66 R09: 7473696c5f71725f
[ +0.000001] R10: 203d212068636e61 R11: 61656c3e2d717226 R12: ffff9812b7daae80
[ +0.000001] R13: ffff9812b7daae80 R14: 0000000000000000 R15: ffff9812b13b8000
[ +0.000001] FS: 0000000000000000(0000) GS:ffff9812b7d80000(0000) knlGS:0000000000000000
[ +0.000000] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ +0.000001] CR2: 000000c000166000 CR3: 0000000163400000 CR4: 00000000000006e0
[ +0.000003] Call Trace:
[ +0.000009] <IRQ>
[ +0.000012] try_to_wake_up+0x227/0x770
[ +0.000003] ? rcu_do_batch+0x381/0x3e0
[ +0.000003] ? __hrtimer_init+0xd0/0xd0
[ +0.000001] hrtimer_wakeup+0x1e/0x30
[ +0.000004] __hrtimer_run_queues+0x118/0x280
[ +0.000002] hrtimer_interrupt+0x10e/0x280
[ +0.000003] smp_apic_timer_interrupt+0x6e/0x130
[ +0.000006] apic_timer_interrupt+0xf/0x20
[ +0.000003] </IRQ>
[ +0.000002] RIP: 0010:native_safe_halt+0xe/0x10
[ +0.000001] Code: 02 20 48 8b 00 a8 08 75 c4 e9 7b ff ff ff cc cc cc cc cc cc cc cc cc cc cc cc cc cc e9 07 00 00 00 0f 00 2d 06 66 5d 00 fb f4 <c3> 90 e9 07 00 00 00 0f 00 2d f6 65 5d 00 f4 c3 cc cc 0f 1f 44 00
[ +0.000000] RSP: 0018:ffffa41840083ee8 EFLAGS: 00000246 ORIG_RAX: ffffffffffffff13
[ +0.000001] RAX: ffffffff82a31f90 RBX: ffff9812b6e70000 RCX: 0000000000000000
[ +0.000001] RDX: 0000000000000003 RSI: 0000000000000087 RDI: 0000000000000003
[ +0.000000] RBP: 0000000000000003 R08: ffff9812b7d9e4e0 R09: 0000000000000000
[ +0.000001] R10: 00000000000313ae R11: 0000000000000000 R12: 0000000000000000
[ +0.000000] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
[ +0.000002] ? __sched_text_end+0x1/0x1
[ +0.000018] default_idle+0x1a/0x140
[ +0.000001] do_idle+0x1cb/0x240
[ +0.000002] cpu_startup_entry+0x19/0x20
[ +0.000003] secondary_startup_64+0xb6/0xc0
[ +0.000004] ---[ end trace 6eb621199bcb7c81 ]---
==> etcd [038315f0f313] <==
2020-11-04 15:07:46.686437 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-04 15:07:49.439168 I | mvcc: store.index: compact 1954
2020-11-04 15:07:49.440797 I | mvcc: finished scheduled compaction at 1954 (took 1.092754ms)
2020-11-04 15:07:56.684848 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-04 15:08:06.685935 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-04 15:08:16.686509 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-04 15:08:26.684797 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-04 15:08:36.685489 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-04 15:08:46.686081 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-04 15:08:56.684863 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-04 15:09:06.684818 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-04 15:09:16.685975 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-04 15:09:26.686860 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-04 15:09:36.686134 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-04 15:09:46.685856 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-04 15:09:56.684878 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-04 15:10:06.685267 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-04 15:10:16.686544 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-04 15:10:26.685533 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-04 15:10:36.686583 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-04 15:10:46.686825 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-04 15:10:56.685697 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-04 15:11:06.686154 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-04 15:11:16.686279 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-04 15:11:26.686440 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-04 15:11:36.685438 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-04 15:11:46.686647 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-04 15:11:56.684679 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-04 15:12:06.685449 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-04 15:12:16.686119 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-04 15:12:26.685774 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-04 15:12:36.685910 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-04 15:12:46.686808 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-04 15:12:49.447599 I | mvcc: store.index: compact 2176
2020-11-04 15:12:49.448982 I | mvcc: finished scheduled compaction at 2176 (took 1.048487ms)
2020-11-04 15:12:56.686561 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-04 15:13:06.688163 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-04 15:13:16.685574 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-04 15:13:26.685167 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-04 15:13:36.684810 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-04 15:13:46.685691 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-04 15:13:56.685521 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-04 15:14:06.687050 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-04 15:14:16.686223 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-04 15:14:26.685023 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-04 15:14:36.686082 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-04 15:14:46.686858 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-04 15:14:56.685603 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-04 15:15:06.684717 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-04 15:15:16.686172 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-04 15:15:26.687364 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-04 15:15:36.686650 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-04 15:15:46.685461 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-04 15:15:56.686351 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-04 15:16:06.685732 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-04 15:16:16.686741 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-04 15:16:26.685522 I | etcdserver/api/etcdhttp: /health OK (status code 200) | |
2020-11-04 15:16:36.687056 I | etcdserver/api/etcdhttp: /health OK (status code 200) | |
2020-11-04 15:16:46.686876 I | etcdserver/api/etcdhttp: /health OK (status code 200) | |
2020-11-04 15:16:56.684855 I | etcdserver/api/etcdhttp: /health OK (status code 200) | |
==> kernel <==
15:17:04 up 20:02, 0 users, load average: 0.24, 0.42, 0.51
Linux minikube 5.6.6-300.fc32.x86_64 #1 SMP Tue Apr 21 13:44:19 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
PRETTY_NAME="Ubuntu 20.04 LTS"
==> kube-apiserver [ee494f087166] <==
I1104 15:04:46.007414 1 client.go:360] parsed scheme: "passthrough"
I1104 15:04:46.007669 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I1104 15:04:46.007741 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I1104 15:05:20.235003 1 client.go:360] parsed scheme: "passthrough"
I1104 15:05:20.235112 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I1104 15:05:20.235129 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I1104 15:06:04.902165 1 client.go:360] parsed scheme: "passthrough"
I1104 15:06:04.902360 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I1104 15:06:04.902394 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I1104 15:06:49.606533 1 client.go:360] parsed scheme: "passthrough"
I1104 15:06:49.606876 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I1104 15:06:49.606954 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I1104 15:07:27.136881 1 client.go:360] parsed scheme: "passthrough"
I1104 15:07:27.137337 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I1104 15:07:27.137498 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I1104 15:08:07.876638 1 client.go:360] parsed scheme: "passthrough"
I1104 15:08:07.877080 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I1104 15:08:07.877215 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I1104 15:08:52.508803 1 client.go:360] parsed scheme: "passthrough"
I1104 15:08:52.509131 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I1104 15:08:52.509245 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I1104 15:09:31.920887 1 client.go:360] parsed scheme: "passthrough"
I1104 15:09:31.921122 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I1104 15:09:31.921161 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I1104 15:10:02.316991 1 client.go:360] parsed scheme: "passthrough"
I1104 15:10:02.317230 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I1104 15:10:02.317445 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I1104 15:10:36.237613 1 client.go:360] parsed scheme: "passthrough"
I1104 15:10:36.238032 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I1104 15:10:36.238227 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I1104 15:11:12.836578 1 client.go:360] parsed scheme: "passthrough"
I1104 15:11:12.836697 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I1104 15:11:12.836720 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I1104 15:11:47.592487 1 client.go:360] parsed scheme: "passthrough"
I1104 15:11:47.592814 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I1104 15:11:47.592911 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I1104 15:12:24.520407 1 client.go:360] parsed scheme: "passthrough"
I1104 15:12:24.520660 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I1104 15:12:24.520731 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I1104 15:13:05.952718 1 client.go:360] parsed scheme: "passthrough"
I1104 15:13:05.953033 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I1104 15:13:05.953102 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I1104 15:13:43.500558 1 client.go:360] parsed scheme: "passthrough"
I1104 15:13:43.500681 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I1104 15:13:43.500711 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I1104 15:14:26.279416 1 client.go:360] parsed scheme: "passthrough"
I1104 15:14:26.279630 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I1104 15:14:26.279690 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I1104 15:15:10.306919 1 client.go:360] parsed scheme: "passthrough"
I1104 15:15:10.307455 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I1104 15:15:10.307640 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I1104 15:15:47.419007 1 client.go:360] parsed scheme: "passthrough"
I1104 15:15:47.419220 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I1104 15:15:47.419261 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I1104 15:16:27.485600 1 client.go:360] parsed scheme: "passthrough"
I1104 15:16:27.485760 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I1104 15:16:27.485799 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I1104 15:17:03.028977 1 client.go:360] parsed scheme: "passthrough"
I1104 15:17:03.029228 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I1104 15:17:03.029347 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
==> kube-controller-manager [856aaf0e58bf] <==
I1104 14:33:07.248493 1 shared_informer.go:240] Waiting for caches to sync for service account
I1104 14:33:07.498374 1 controllermanager.go:549] Started "replicaset"
W1104 14:33:07.498413 1 controllermanager.go:541] Skipping "ephemeral-volume"
I1104 14:33:07.498972 1 replica_set.go:182] Starting replicaset controller
I1104 14:33:07.498998 1 shared_informer.go:240] Waiting for caches to sync for ReplicaSet
I1104 14:33:07.500241 1 shared_informer.go:240] Waiting for caches to sync for resource quota
W1104 14:33:07.513943 1 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="minikube" does not exist
I1104 14:33:07.517351 1 shared_informer.go:247] Caches are synced for TTL
I1104 14:33:07.519997 1 shared_informer.go:247] Caches are synced for job
I1104 14:33:07.533093 1 event.go:291] "Event occurred" object="kube-system/ingress-nginx-admission-patch" kind="Job" apiVersion="batch/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: ingress-nginx-admission-patch-2dk5b"
I1104 14:33:07.533858 1 event.go:291] "Event occurred" object="kube-system/ingress-nginx-admission-create" kind="Job" apiVersion="batch/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: ingress-nginx-admission-create-x6qkp"
I1104 14:33:07.533964 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kube-apiserver-client
I1104 14:33:07.534045 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-legacy-unknown
I1104 14:33:07.534079 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-serving
I1104 14:33:07.534121 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-client
I1104 14:33:07.542723 1 shared_informer.go:247] Caches are synced for stateful set
I1104 14:33:07.548500 1 shared_informer.go:247] Caches are synced for PV protection
I1104 14:33:07.548741 1 shared_informer.go:247] Caches are synced for certificate-csrapproving
I1104 14:33:07.548843 1 shared_informer.go:247] Caches are synced for service account
I1104 14:33:07.549432 1 shared_informer.go:247] Caches are synced for expand
I1104 14:33:07.549712 1 shared_informer.go:247] Caches are synced for ClusterRoleAggregator
I1104 14:33:07.558986 1 shared_informer.go:247] Caches are synced for namespace
I1104 14:33:07.626361 1 shared_informer.go:247] Caches are synced for GC
I1104 14:33:07.626405 1 shared_informer.go:247] Caches are synced for PVC protection
I1104 14:33:07.626372 1 shared_informer.go:247] Caches are synced for ReplicationController
I1104 14:33:07.626465 1 shared_informer.go:247] Caches are synced for bootstrap_signer
I1104 14:33:07.626494 1 shared_informer.go:247] Caches are synced for HPA
I1104 14:33:07.626791 1 shared_informer.go:247] Caches are synced for endpoint_slice
I1104 14:33:07.627275 1 shared_informer.go:247] Caches are synced for persistent volume
I1104 14:33:07.632035 1 shared_informer.go:247] Caches are synced for endpoint
E1104 14:33:07.668692 1 clusterroleaggregation_controller.go:181] view failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "view": the object has been modified; please apply your changes to the latest version and try again
I1104 14:33:07.698710 1 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring
I1104 14:33:07.699378 1 shared_informer.go:247] Caches are synced for disruption
I1104 14:33:07.699512 1 disruption.go:339] Sending events to api server.
I1104 14:33:07.699085 1 shared_informer.go:247] Caches are synced for ReplicaSet
I1104 14:33:07.750182 1 shared_informer.go:247] Caches are synced for deployment
I1104 14:33:07.750555 1 shared_informer.go:247] Caches are synced for daemon sets
I1104 14:33:07.773442 1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-f9fd979d6 to 1"
I1104 14:33:07.778823 1 event.go:291] "Event occurred" object="kube-system/ingress-nginx-controller" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set ingress-nginx-controller-799c9469f7 to 1"
I1104 14:33:07.783732 1 event.go:291] "Event occurred" object="kube-system/coredns-f9fd979d6" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-f9fd979d6-j29xl"
I1104 14:33:07.784108 1 shared_informer.go:247] Caches are synced for attach detach
I1104 14:33:07.795347 1 shared_informer.go:247] Caches are synced for resource quota
I1104 14:33:07.795751 1 event.go:291] "Event occurred" object="kube-system/ingress-nginx-controller-799c9469f7" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: ingress-nginx-controller-799c9469f7-65qpz"
I1104 14:33:07.826487 1 shared_informer.go:247] Caches are synced for taint
I1104 14:33:07.827516 1 node_lifecycle_controller.go:1429] Initializing eviction metric for zone:
W1104 14:33:07.827633 1 node_lifecycle_controller.go:1044] Missing timestamp for Node minikube. Assuming now as a timestamp.
I1104 14:33:07.827725 1 node_lifecycle_controller.go:1195] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
I1104 14:33:07.826744 1 shared_informer.go:247] Caches are synced for resource quota
I1104 14:33:07.827947 1 taint_manager.go:187] Starting NoExecuteTaintManager
I1104 14:33:07.828047 1 event.go:291] "Event occurred" object="minikube" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node minikube event: Registered Node minikube in Controller"
I1104 14:33:07.860393 1 shared_informer.go:240] Waiting for caches to sync for garbage collector
I1104 14:33:07.867494 1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-wsc4c"
E1104 14:33:07.948709 1 daemon_controller.go:320] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"/apis/apps/v1/namespaces/kube-system/daemonsets/kube-proxy", UID:"7deb28c2-20bf-46b0-af9f-9614c99175c2", ResourceVersion:"221", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63740097181, loc:(*time.Location)(0x6a59c80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc001b8cf00), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc001b8cf20)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc001b8cf40), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), 
Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc00172fa40), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001b8cf60), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), 
Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001b8cf80), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.19.2", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", 
"--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc001b8cfc0)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc00191ea80), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc00133be38), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0004dfb90), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", 
TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc00019e430)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc00133be88)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
I1104 14:33:08.163435 1 shared_informer.go:247] Caches are synced for garbage collector
I1104 14:33:08.198138 1 shared_informer.go:247] Caches are synced for garbage collector
I1104 14:33:08.198193 1 garbagecollector.go:137] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I1104 14:33:22.829113 1 node_lifecycle_controller.go:1222] Controller detected that some Nodes are Ready. Exiting master disruption mode.
I1104 14:47:00.013521 1 event.go:291] "Event occurred" object="kube-system/ingress-nginx-controller-799c9469f7" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: ingress-nginx-controller-799c9469f7-2j2mk"
I1104 14:47:12.565265 1 event.go:291] "Event occurred" object="kube-system/ingress-nginx-admission-patch" kind="Job" apiVersion="batch/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: ingress-nginx-admission-patch-k74gx"
I1104 14:47:24.932678 1 event.go:291] "Event occurred" object="kube-system/ingress-nginx-admission-create" kind="Job" apiVersion="batch/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: ingress-nginx-admission-create-tr846"
==> kube-proxy [186080efe05c] <==
I1104 14:33:09.611503 1 node.go:136] Successfully retrieved node IP: 192.168.49.2
I1104 14:33:09.611765 1 server_others.go:111] kube-proxy node IP is an IPv4 address (192.168.49.2), assume IPv4 operation
W1104 14:33:09.657727 1 server_others.go:579] Unknown proxy mode "", assuming iptables proxy
I1104 14:33:09.658130 1 server_others.go:186] Using iptables Proxier.
W1104 14:33:09.658249 1 server_others.go:456] detect-local-mode set to ClusterCIDR, but no cluster CIDR defined
I1104 14:33:09.658333 1 server_others.go:467] detect-local-mode: ClusterCIDR , defaulting to no-op detect-local
I1104 14:33:09.658876 1 server.go:650] Version: v1.19.2
I1104 14:33:09.659626 1 conntrack.go:52] Setting nf_conntrack_max to 131072
I1104 14:33:09.659813 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
I1104 14:33:09.659993 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
I1104 14:33:09.660355 1 config.go:315] Starting service config controller
I1104 14:33:09.660535 1 shared_informer.go:240] Waiting for caches to sync for service config
I1104 14:33:09.660400 1 config.go:224] Starting endpoint slice config controller
I1104 14:33:09.662234 1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
I1104 14:33:09.760950 1 shared_informer.go:247] Caches are synced for service config
I1104 14:33:09.762778 1 shared_informer.go:247] Caches are synced for endpoint slice config
==> kube-scheduler [a22e98d48c5a] <== | |
I1104 14:32:48.540175 1 registry.go:173] Registering SelectorSpread plugin | |
I1104 14:32:48.540887 1 registry.go:173] Registering SelectorSpread plugin | |
I1104 14:32:50.462105 1 serving.go:331] Generated self-signed cert in-memory | |
W1104 14:32:57.931165 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA' | |
W1104 14:32:57.931220 1 authentication.go:294] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system" | |
W1104 14:32:57.931305 1 authentication.go:295] Continuing without authentication configuration. This may treat all requests as anonymous. | |
W1104 14:32:57.931314 1 authentication.go:296] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false | |
I1104 14:32:58.030464 1 registry.go:173] Registering SelectorSpread plugin | |
I1104 14:32:58.030496 1 registry.go:173] Registering SelectorSpread plugin | |
I1104 14:32:58.034753 1 secure_serving.go:197] Serving securely on 127.0.0.1:10259 | |
I1104 14:32:58.035542 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file | |
I1104 14:32:58.035562 1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file | |
I1104 14:32:58.035583 1 tlsconfig.go:240] Starting DynamicServingCertificateController | |
E1104 14:32:58.041124 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope | |
E1104 14:32:58.041126 1 reflector.go:127] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system" | |
E1104 14:32:58.044357 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope | |
E1104 14:32:58.044558 1 reflector.go:127] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:188: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope | |
E1104 14:32:58.044554 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope | |
E1104 14:32:58.044716 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope | |
E1104 14:32:58.044922 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope | |
E1104 14:32:58.045060 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope | |
E1104 14:32:58.045331 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope | |
E1104 14:32:58.045374 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope | |
E1104 14:32:58.045506 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope | |
E1104 14:32:58.045612 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope | |
E1104 14:32:58.045682 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope | |
E1104 14:32:58.984108 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope | |
E1104 14:32:59.142563 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope | |
E1104 14:32:59.148935 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope | |
E1104 14:32:59.248894 1 reflector.go:127] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system" | |
I1104 14:33:02.335742 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file | |
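A note on the scheduler errors above: the "forbidden ... system:kube-scheduler" RBAC messages are typically a transient startup race, occurring before the default ClusterRoleBindings have been reconciled, and they stop once the informer caches sync (the "Caches are synced" line). A hedged sketch of how one might confirm the permissions settled correctly afterwards, using standard kubectl commands against this cluster:

```shell
# Guard so this sketch exits cleanly where no cluster/tooling is available.
command -v kubectl >/dev/null 2>&1 || { echo "kubectl not installed; skipping"; exit 0; }

# Check whether kube-scheduler can now list the resources it was denied above.
kubectl auth can-i list pods --as=system:kube-scheduler
kubectl auth can-i watch storageclasses.storage.k8s.io --as=system:kube-scheduler

# Inspect the binding that grants kube-scheduler its permissions.
kubectl get clusterrolebinding system:kube-scheduler -o wide
```

If `auth can-i` still answers `no` after startup, the RBAC errors are persistent rather than transient and worth investigating separately from the image-pull failures below.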
==> kubelet <== | |
-- Logs begin at Wed 2020-11-04 14:31:59 UTC, end at Wed 2020-11-04 15:17:04 UTC. -- | |
Nov 04 15:11:50 minikube kubelet[2093]: E1104 15:11:50.076400 2093 pod_workers.go:191] Error syncing pod f7b92a1c-5b2c-4bc8-8e79-ef944b4699a5 ("ingress-nginx-admission-create-tr846_kube-system(f7b92a1c-5b2c-4bc8-8e79-ef944b4699a5)"), skipping: failed to "StartContainer" for "create" with ImagePullBackOff: "Back-off pulling image \"jettech/kube-webhook-certgen:v1.2.2\"" | |
Nov 04 15:11:55 minikube kubelet[2093]: E1104 15:11:55.074520 2093 pod_workers.go:191] Error syncing pod 175606cb-fd1f-453f-a4eb-6e93bdcffba5 ("ingress-nginx-admission-patch-k74gx_kube-system(175606cb-fd1f-453f-a4eb-6e93bdcffba5)"), skipping: failed to "StartContainer" for "patch" with ImagePullBackOff: "Back-off pulling image \"jettech/kube-webhook-certgen:v1.2.2\"" | |
Nov 04 15:12:04 minikube kubelet[2093]: E1104 15:12:04.074705 2093 pod_workers.go:191] Error syncing pod f7b92a1c-5b2c-4bc8-8e79-ef944b4699a5 ("ingress-nginx-admission-create-tr846_kube-system(f7b92a1c-5b2c-4bc8-8e79-ef944b4699a5)"), skipping: failed to "StartContainer" for "create" with ImagePullBackOff: "Back-off pulling image \"jettech/kube-webhook-certgen:v1.2.2\"" | |
Nov 04 15:12:08 minikube kubelet[2093]: E1104 15:12:08.075019 2093 pod_workers.go:191] Error syncing pod 175606cb-fd1f-453f-a4eb-6e93bdcffba5 ("ingress-nginx-admission-patch-k74gx_kube-system(175606cb-fd1f-453f-a4eb-6e93bdcffba5)"), skipping: failed to "StartContainer" for "patch" with ImagePullBackOff: "Back-off pulling image \"jettech/kube-webhook-certgen:v1.2.2\"" | |
Nov 04 15:12:17 minikube kubelet[2093]: E1104 15:12:17.076895 2093 pod_workers.go:191] Error syncing pod f7b92a1c-5b2c-4bc8-8e79-ef944b4699a5 ("ingress-nginx-admission-create-tr846_kube-system(f7b92a1c-5b2c-4bc8-8e79-ef944b4699a5)"), skipping: failed to "StartContainer" for "create" with ImagePullBackOff: "Back-off pulling image \"jettech/kube-webhook-certgen:v1.2.2\"" | |
Nov 04 15:12:21 minikube kubelet[2093]: E1104 15:12:21.079156 2093 pod_workers.go:191] Error syncing pod 175606cb-fd1f-453f-a4eb-6e93bdcffba5 ("ingress-nginx-admission-patch-k74gx_kube-system(175606cb-fd1f-453f-a4eb-6e93bdcffba5)"), skipping: failed to "StartContainer" for "patch" with ImagePullBackOff: "Back-off pulling image \"jettech/kube-webhook-certgen:v1.2.2\"" | |
Nov 04 15:12:30 minikube kubelet[2093]: E1104 15:12:30.074368 2093 pod_workers.go:191] Error syncing pod f7b92a1c-5b2c-4bc8-8e79-ef944b4699a5 ("ingress-nginx-admission-create-tr846_kube-system(f7b92a1c-5b2c-4bc8-8e79-ef944b4699a5)"), skipping: failed to "StartContainer" for "create" with ImagePullBackOff: "Back-off pulling image \"jettech/kube-webhook-certgen:v1.2.2\"" | |
Nov 04 15:12:35 minikube kubelet[2093]: E1104 15:12:35.080506 2093 pod_workers.go:191] Error syncing pod 175606cb-fd1f-453f-a4eb-6e93bdcffba5 ("ingress-nginx-admission-patch-k74gx_kube-system(175606cb-fd1f-453f-a4eb-6e93bdcffba5)"), skipping: failed to "StartContainer" for "patch" with ImagePullBackOff: "Back-off pulling image \"jettech/kube-webhook-certgen:v1.2.2\"" | |
Nov 04 15:12:42 minikube kubelet[2093]: E1104 15:12:42.074892 2093 pod_workers.go:191] Error syncing pod f7b92a1c-5b2c-4bc8-8e79-ef944b4699a5 ("ingress-nginx-admission-create-tr846_kube-system(f7b92a1c-5b2c-4bc8-8e79-ef944b4699a5)"), skipping: failed to "StartContainer" for "create" with ImagePullBackOff: "Back-off pulling image \"jettech/kube-webhook-certgen:v1.2.2\"" | |
Nov 04 15:12:50 minikube kubelet[2093]: E1104 15:12:50.082340 2093 pod_workers.go:191] Error syncing pod 175606cb-fd1f-453f-a4eb-6e93bdcffba5 ("ingress-nginx-admission-patch-k74gx_kube-system(175606cb-fd1f-453f-a4eb-6e93bdcffba5)"), skipping: failed to "StartContainer" for "patch" with ImagePullBackOff: "Back-off pulling image \"jettech/kube-webhook-certgen:v1.2.2\"" | |
Nov 04 15:12:53 minikube kubelet[2093]: E1104 15:12:53.079221 2093 pod_workers.go:191] Error syncing pod f7b92a1c-5b2c-4bc8-8e79-ef944b4699a5 ("ingress-nginx-admission-create-tr846_kube-system(f7b92a1c-5b2c-4bc8-8e79-ef944b4699a5)"), skipping: failed to "StartContainer" for "create" with ImagePullBackOff: "Back-off pulling image \"jettech/kube-webhook-certgen:v1.2.2\"" | |
Nov 04 15:13:02 minikube kubelet[2093]: E1104 15:13:02.078853 2093 pod_workers.go:191] Error syncing pod 175606cb-fd1f-453f-a4eb-6e93bdcffba5 ("ingress-nginx-admission-patch-k74gx_kube-system(175606cb-fd1f-453f-a4eb-6e93bdcffba5)"), skipping: failed to "StartContainer" for "patch" with ImagePullBackOff: "Back-off pulling image \"jettech/kube-webhook-certgen:v1.2.2\"" | |
Nov 04 15:13:07 minikube kubelet[2093]: E1104 15:13:07.077939 2093 pod_workers.go:191] Error syncing pod f7b92a1c-5b2c-4bc8-8e79-ef944b4699a5 ("ingress-nginx-admission-create-tr846_kube-system(f7b92a1c-5b2c-4bc8-8e79-ef944b4699a5)"), skipping: failed to "StartContainer" for "create" with ImagePullBackOff: "Back-off pulling image \"jettech/kube-webhook-certgen:v1.2.2\"" | |
Nov 04 15:13:16 minikube kubelet[2093]: E1104 15:13:16.080113 2093 pod_workers.go:191] Error syncing pod 175606cb-fd1f-453f-a4eb-6e93bdcffba5 ("ingress-nginx-admission-patch-k74gx_kube-system(175606cb-fd1f-453f-a4eb-6e93bdcffba5)"), skipping: failed to "StartContainer" for "patch" with ImagePullBackOff: "Back-off pulling image \"jettech/kube-webhook-certgen:v1.2.2\"" | |
Nov 04 15:13:22 minikube kubelet[2093]: E1104 15:13:22.077937 2093 pod_workers.go:191] Error syncing pod f7b92a1c-5b2c-4bc8-8e79-ef944b4699a5 ("ingress-nginx-admission-create-tr846_kube-system(f7b92a1c-5b2c-4bc8-8e79-ef944b4699a5)"), skipping: failed to "StartContainer" for "create" with ImagePullBackOff: "Back-off pulling image \"jettech/kube-webhook-certgen:v1.2.2\"" | |
Nov 04 15:13:29 minikube kubelet[2093]: E1104 15:13:29.074737 2093 pod_workers.go:191] Error syncing pod 175606cb-fd1f-453f-a4eb-6e93bdcffba5 ("ingress-nginx-admission-patch-k74gx_kube-system(175606cb-fd1f-453f-a4eb-6e93bdcffba5)"), skipping: failed to "StartContainer" for "patch" with ImagePullBackOff: "Back-off pulling image \"jettech/kube-webhook-certgen:v1.2.2\"" | |
Nov 04 15:13:34 minikube kubelet[2093]: E1104 15:13:34.077848 2093 pod_workers.go:191] Error syncing pod f7b92a1c-5b2c-4bc8-8e79-ef944b4699a5 ("ingress-nginx-admission-create-tr846_kube-system(f7b92a1c-5b2c-4bc8-8e79-ef944b4699a5)"), skipping: failed to "StartContainer" for "create" with ImagePullBackOff: "Back-off pulling image \"jettech/kube-webhook-certgen:v1.2.2\"" | |
Nov 04 15:13:40 minikube kubelet[2093]: E1104 15:13:40.570473 2093 secret.go:195] Couldn't get secret kube-system/ingress-nginx-admission: secret "ingress-nginx-admission" not found | |
Nov 04 15:13:40 minikube kubelet[2093]: E1104 15:13:40.571549 2093 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/secret/209b8809-ed33-4c6c-9247-8a652bb5f761-webhook-cert podName:209b8809-ed33-4c6c-9247-8a652bb5f761 nodeName:}" failed. No retries permitted until 2020-11-04 15:15:42.570700556 +0000 UTC m=+2561.373771210 (durationBeforeRetry 2m2s). Error: "MountVolume.SetUp failed for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/209b8809-ed33-4c6c-9247-8a652bb5f761-webhook-cert\") pod \"ingress-nginx-controller-799c9469f7-2j2mk\" (UID: \"209b8809-ed33-4c6c-9247-8a652bb5f761\") : secret \"ingress-nginx-admission\" not found" | |
Nov 04 15:13:44 minikube kubelet[2093]: E1104 15:13:44.074878 2093 pod_workers.go:191] Error syncing pod 175606cb-fd1f-453f-a4eb-6e93bdcffba5 ("ingress-nginx-admission-patch-k74gx_kube-system(175606cb-fd1f-453f-a4eb-6e93bdcffba5)"), skipping: failed to "StartContainer" for "patch" with ImagePullBackOff: "Back-off pulling image \"jettech/kube-webhook-certgen:v1.2.2\"" | |
Nov 04 15:13:47 minikube kubelet[2093]: E1104 15:13:47.081134 2093 pod_workers.go:191] Error syncing pod f7b92a1c-5b2c-4bc8-8e79-ef944b4699a5 ("ingress-nginx-admission-create-tr846_kube-system(f7b92a1c-5b2c-4bc8-8e79-ef944b4699a5)"), skipping: failed to "StartContainer" for "create" with ImagePullBackOff: "Back-off pulling image \"jettech/kube-webhook-certgen:v1.2.2\"" | |
Nov 04 15:13:55 minikube kubelet[2093]: E1104 15:13:55.078359 2093 pod_workers.go:191] Error syncing pod 175606cb-fd1f-453f-a4eb-6e93bdcffba5 ("ingress-nginx-admission-patch-k74gx_kube-system(175606cb-fd1f-453f-a4eb-6e93bdcffba5)"), skipping: failed to "StartContainer" for "patch" with ImagePullBackOff: "Back-off pulling image \"jettech/kube-webhook-certgen:v1.2.2\"" | |
Nov 04 15:14:02 minikube kubelet[2093]: E1104 15:14:02.075962 2093 kubelet.go:1594] Unable to attach or mount volumes for pod "ingress-nginx-controller-799c9469f7-2j2mk_kube-system(209b8809-ed33-4c6c-9247-8a652bb5f761)": unmounted volumes=[webhook-cert], unattached volumes=[webhook-cert ingress-nginx-token-tb4kg]: timed out waiting for the condition; skipping pod | |
Nov 04 15:14:02 minikube kubelet[2093]: E1104 15:14:02.079127 2093 pod_workers.go:191] Error syncing pod 209b8809-ed33-4c6c-9247-8a652bb5f761 ("ingress-nginx-controller-799c9469f7-2j2mk_kube-system(209b8809-ed33-4c6c-9247-8a652bb5f761)"), skipping: unmounted volumes=[webhook-cert], unattached volumes=[webhook-cert ingress-nginx-token-tb4kg]: timed out waiting for the condition | |
Nov 04 15:14:02 minikube kubelet[2093]: E1104 15:14:02.084696 2093 pod_workers.go:191] Error syncing pod f7b92a1c-5b2c-4bc8-8e79-ef944b4699a5 ("ingress-nginx-admission-create-tr846_kube-system(f7b92a1c-5b2c-4bc8-8e79-ef944b4699a5)"), skipping: failed to "StartContainer" for "create" with ImagePullBackOff: "Back-off pulling image \"jettech/kube-webhook-certgen:v1.2.2\"" | |
Nov 04 15:14:07 minikube kubelet[2093]: E1104 15:14:07.078028 2093 pod_workers.go:191] Error syncing pod 175606cb-fd1f-453f-a4eb-6e93bdcffba5 ("ingress-nginx-admission-patch-k74gx_kube-system(175606cb-fd1f-453f-a4eb-6e93bdcffba5)"), skipping: failed to "StartContainer" for "patch" with ImagePullBackOff: "Back-off pulling image \"jettech/kube-webhook-certgen:v1.2.2\"" | |
Nov 04 15:14:14 minikube kubelet[2093]: E1104 15:14:14.075421 2093 pod_workers.go:191] Error syncing pod f7b92a1c-5b2c-4bc8-8e79-ef944b4699a5 ("ingress-nginx-admission-create-tr846_kube-system(f7b92a1c-5b2c-4bc8-8e79-ef944b4699a5)"), skipping: failed to "StartContainer" for "create" with ImagePullBackOff: "Back-off pulling image \"jettech/kube-webhook-certgen:v1.2.2\"" | |
Nov 04 15:14:22 minikube kubelet[2093]: E1104 15:14:22.075169 2093 pod_workers.go:191] Error syncing pod 175606cb-fd1f-453f-a4eb-6e93bdcffba5 ("ingress-nginx-admission-patch-k74gx_kube-system(175606cb-fd1f-453f-a4eb-6e93bdcffba5)"), skipping: failed to "StartContainer" for "patch" with ImagePullBackOff: "Back-off pulling image \"jettech/kube-webhook-certgen:v1.2.2\"" | |
Nov 04 15:14:26 minikube kubelet[2093]: E1104 15:14:26.076474 2093 pod_workers.go:191] Error syncing pod f7b92a1c-5b2c-4bc8-8e79-ef944b4699a5 ("ingress-nginx-admission-create-tr846_kube-system(f7b92a1c-5b2c-4bc8-8e79-ef944b4699a5)"), skipping: failed to "StartContainer" for "create" with ImagePullBackOff: "Back-off pulling image \"jettech/kube-webhook-certgen:v1.2.2\"" | |
Nov 04 15:14:37 minikube kubelet[2093]: E1104 15:14:37.078174 2093 pod_workers.go:191] Error syncing pod 175606cb-fd1f-453f-a4eb-6e93bdcffba5 ("ingress-nginx-admission-patch-k74gx_kube-system(175606cb-fd1f-453f-a4eb-6e93bdcffba5)"), skipping: failed to "StartContainer" for "patch" with ImagePullBackOff: "Back-off pulling image \"jettech/kube-webhook-certgen:v1.2.2\"" | |
Nov 04 15:14:41 minikube kubelet[2093]: E1104 15:14:41.074424 2093 pod_workers.go:191] Error syncing pod f7b92a1c-5b2c-4bc8-8e79-ef944b4699a5 ("ingress-nginx-admission-create-tr846_kube-system(f7b92a1c-5b2c-4bc8-8e79-ef944b4699a5)"), skipping: failed to "StartContainer" for "create" with ImagePullBackOff: "Back-off pulling image \"jettech/kube-webhook-certgen:v1.2.2\"" | |
Nov 04 15:14:48 minikube kubelet[2093]: E1104 15:14:48.074966 2093 pod_workers.go:191] Error syncing pod 175606cb-fd1f-453f-a4eb-6e93bdcffba5 ("ingress-nginx-admission-patch-k74gx_kube-system(175606cb-fd1f-453f-a4eb-6e93bdcffba5)"), skipping: failed to "StartContainer" for "patch" with ImagePullBackOff: "Back-off pulling image \"jettech/kube-webhook-certgen:v1.2.2\"" | |
Nov 04 15:14:55 minikube kubelet[2093]: E1104 15:14:55.075404 2093 pod_workers.go:191] Error syncing pod f7b92a1c-5b2c-4bc8-8e79-ef944b4699a5 ("ingress-nginx-admission-create-tr846_kube-system(f7b92a1c-5b2c-4bc8-8e79-ef944b4699a5)"), skipping: failed to "StartContainer" for "create" with ImagePullBackOff: "Back-off pulling image \"jettech/kube-webhook-certgen:v1.2.2\"" | |
Nov 04 15:15:03 minikube kubelet[2093]: E1104 15:15:03.075097 2093 pod_workers.go:191] Error syncing pod 175606cb-fd1f-453f-a4eb-6e93bdcffba5 ("ingress-nginx-admission-patch-k74gx_kube-system(175606cb-fd1f-453f-a4eb-6e93bdcffba5)"), skipping: failed to "StartContainer" for "patch" with ImagePullBackOff: "Back-off pulling image \"jettech/kube-webhook-certgen:v1.2.2\"" | |
Nov 04 15:15:06 minikube kubelet[2093]: E1104 15:15:06.074452 2093 pod_workers.go:191] Error syncing pod f7b92a1c-5b2c-4bc8-8e79-ef944b4699a5 ("ingress-nginx-admission-create-tr846_kube-system(f7b92a1c-5b2c-4bc8-8e79-ef944b4699a5)"), skipping: failed to "StartContainer" for "create" with ImagePullBackOff: "Back-off pulling image \"jettech/kube-webhook-certgen:v1.2.2\"" | |
Nov 04 15:15:14 minikube kubelet[2093]: E1104 15:15:14.075162 2093 pod_workers.go:191] Error syncing pod 175606cb-fd1f-453f-a4eb-6e93bdcffba5 ("ingress-nginx-admission-patch-k74gx_kube-system(175606cb-fd1f-453f-a4eb-6e93bdcffba5)"), skipping: failed to "StartContainer" for "patch" with ImagePullBackOff: "Back-off pulling image \"jettech/kube-webhook-certgen:v1.2.2\"" | |
Nov 04 15:15:36 minikube kubelet[2093]: E1104 15:15:36.093701 2093 remote_image.go:113] PullImage "jettech/kube-webhook-certgen:v1.2.2" from image service failed: rpc error: code = Unknown desc = Error response from daemon: Get https://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) | |
Nov 04 15:15:36 minikube kubelet[2093]: E1104 15:15:36.094079 2093 kuberuntime_image.go:51] Pull image "jettech/kube-webhook-certgen:v1.2.2" failed: rpc error: code = Unknown desc = Error response from daemon: Get https://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) | |
Nov 04 15:15:36 minikube kubelet[2093]: E1104 15:15:36.095213 2093 kuberuntime_manager.go:804] container &Container{Name:create,Image:jettech/kube-webhook-certgen:v1.2.2,Command:[],Args:[create --host=ingress-nginx-controller-admission,ingress-nginx-controller-admission.kube-system.svc --namespace=kube-system --secret-name=ingress-nginx-admission],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ingress-nginx-admission-token-9rsm6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod ingress-nginx-admission-create-tr846_kube-system(f7b92a1c-5b2c-4bc8-8e79-ef944b4699a5): ErrImagePull: rpc error: code = Unknown desc = Error response from daemon: Get https://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) | |
Nov 04 15:15:36 minikube kubelet[2093]: E1104 15:15:36.095490 2093 pod_workers.go:191] Error syncing pod f7b92a1c-5b2c-4bc8-8e79-ef944b4699a5 ("ingress-nginx-admission-create-tr846_kube-system(f7b92a1c-5b2c-4bc8-8e79-ef944b4699a5)"), skipping: failed to "StartContainer" for "create" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get https://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" | |
Nov 04 15:15:41 minikube kubelet[2093]: E1104 15:15:41.091043 2093 remote_image.go:113] PullImage "jettech/kube-webhook-certgen:v1.2.2" from image service failed: rpc error: code = Unknown desc = Error response from daemon: Get https://registry-1.docker.io/v2/: dial tcp: lookup registry-1.docker.io on 192.168.49.1:53: read udp 192.168.49.2:39476->192.168.49.1:53: i/o timeout | |
Nov 04 15:15:41 minikube kubelet[2093]: E1104 15:15:41.091124 2093 kuberuntime_image.go:51] Pull image "jettech/kube-webhook-certgen:v1.2.2" failed: rpc error: code = Unknown desc = Error response from daemon: Get https://registry-1.docker.io/v2/: dial tcp: lookup registry-1.docker.io on 192.168.49.1:53: read udp 192.168.49.2:39476->192.168.49.1:53: i/o timeout | |
Nov 04 15:15:41 minikube kubelet[2093]: E1104 15:15:41.091359 2093 kuberuntime_manager.go:804] container &Container{Name:patch,Image:jettech/kube-webhook-certgen:v1.2.2,Command:[],Args:[patch --webhook-name=ingress-nginx-admission --namespace=kube-system --patch-mutating=false --secret-name=ingress-nginx-admission --patch-failure-policy=Fail],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ingress-nginx-admission-token-9rsm6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod ingress-nginx-admission-patch-k74gx_kube-system(175606cb-fd1f-453f-a4eb-6e93bdcffba5): ErrImagePull: rpc error: code = Unknown desc = Error response from daemon: Get https://registry-1.docker.io/v2/: dial tcp: lookup registry-1.docker.io on 192.168.49.1:53: read udp 192.168.49.2:39476->192.168.49.1:53: i/o timeout | |
Nov 04 15:15:41 minikube kubelet[2093]: E1104 15:15:41.091423 2093 pod_workers.go:191] Error syncing pod 175606cb-fd1f-453f-a4eb-6e93bdcffba5 ("ingress-nginx-admission-patch-k74gx_kube-system(175606cb-fd1f-453f-a4eb-6e93bdcffba5)"), skipping: failed to "StartContainer" for "patch" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get https://registry-1.docker.io/v2/: dial tcp: lookup registry-1.docker.io on 192.168.49.1:53: read udp 192.168.49.2:39476->192.168.49.1:53: i/o timeout" | |
Nov 04 15:15:42 minikube kubelet[2093]: E1104 15:15:42.623685 2093 secret.go:195] Couldn't get secret kube-system/ingress-nginx-admission: secret "ingress-nginx-admission" not found | |
Nov 04 15:15:42 minikube kubelet[2093]: E1104 15:15:42.624469 2093 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/secret/209b8809-ed33-4c6c-9247-8a652bb5f761-webhook-cert podName:209b8809-ed33-4c6c-9247-8a652bb5f761 nodeName:}" failed. No retries permitted until 2020-11-04 15:17:44.624417912 +0000 UTC m=+2683.427488552 (durationBeforeRetry 2m2s). Error: "MountVolume.SetUp failed for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/209b8809-ed33-4c6c-9247-8a652bb5f761-webhook-cert\") pod \"ingress-nginx-controller-799c9469f7-2j2mk\" (UID: \"209b8809-ed33-4c6c-9247-8a652bb5f761\") : secret \"ingress-nginx-admission\" not found" | |
Nov 04 15:15:48 minikube kubelet[2093]: E1104 15:15:48.075536 2093 pod_workers.go:191] Error syncing pod f7b92a1c-5b2c-4bc8-8e79-ef944b4699a5 ("ingress-nginx-admission-create-tr846_kube-system(f7b92a1c-5b2c-4bc8-8e79-ef944b4699a5)"), skipping: failed to "StartContainer" for "create" with ImagePullBackOff: "Back-off pulling image \"jettech/kube-webhook-certgen:v1.2.2\"" | |
Nov 04 15:15:54 minikube kubelet[2093]: E1104 15:15:54.078682 2093 pod_workers.go:191] Error syncing pod 175606cb-fd1f-453f-a4eb-6e93bdcffba5 ("ingress-nginx-admission-patch-k74gx_kube-system(175606cb-fd1f-453f-a4eb-6e93bdcffba5)"), skipping: failed to "StartContainer" for "patch" with ImagePullBackOff: "Back-off pulling image \"jettech/kube-webhook-certgen:v1.2.2\"" | |
Nov 04 15:16:01 minikube kubelet[2093]: E1104 15:16:01.082177 2093 pod_workers.go:191] Error syncing pod f7b92a1c-5b2c-4bc8-8e79-ef944b4699a5 ("ingress-nginx-admission-create-tr846_kube-system(f7b92a1c-5b2c-4bc8-8e79-ef944b4699a5)"), skipping: failed to "StartContainer" for "create" with ImagePullBackOff: "Back-off pulling image \"jettech/kube-webhook-certgen:v1.2.2\"" | |
Nov 04 15:16:05 minikube kubelet[2093]: E1104 15:16:05.078012 2093 pod_workers.go:191] Error syncing pod 175606cb-fd1f-453f-a4eb-6e93bdcffba5 ("ingress-nginx-admission-patch-k74gx_kube-system(175606cb-fd1f-453f-a4eb-6e93bdcffba5)"), skipping: failed to "StartContainer" for "patch" with ImagePullBackOff: "Back-off pulling image \"jettech/kube-webhook-certgen:v1.2.2\"" | |
Nov 04 15:16:14 minikube kubelet[2093]: E1104 15:16:14.078140 2093 pod_workers.go:191] Error syncing pod f7b92a1c-5b2c-4bc8-8e79-ef944b4699a5 ("ingress-nginx-admission-create-tr846_kube-system(f7b92a1c-5b2c-4bc8-8e79-ef944b4699a5)"), skipping: failed to "StartContainer" for "create" with ImagePullBackOff: "Back-off pulling image \"jettech/kube-webhook-certgen:v1.2.2\"" | |
Nov 04 15:16:19 minikube kubelet[2093]: E1104 15:16:19.075076 2093 pod_workers.go:191] Error syncing pod 175606cb-fd1f-453f-a4eb-6e93bdcffba5 ("ingress-nginx-admission-patch-k74gx_kube-system(175606cb-fd1f-453f-a4eb-6e93bdcffba5)"), skipping: failed to "StartContainer" for "patch" with ImagePullBackOff: "Back-off pulling image \"jettech/kube-webhook-certgen:v1.2.2\"" | |
Nov 04 15:16:20 minikube kubelet[2093]: E1104 15:16:20.073906 2093 kubelet.go:1594] Unable to attach or mount volumes for pod "ingress-nginx-controller-799c9469f7-2j2mk_kube-system(209b8809-ed33-4c6c-9247-8a652bb5f761)": unmounted volumes=[webhook-cert], unattached volumes=[webhook-cert ingress-nginx-token-tb4kg]: timed out waiting for the condition; skipping pod | |
Nov 04 15:16:20 minikube kubelet[2093]: E1104 15:16:20.074054 2093 pod_workers.go:191] Error syncing pod 209b8809-ed33-4c6c-9247-8a652bb5f761 ("ingress-nginx-controller-799c9469f7-2j2mk_kube-system(209b8809-ed33-4c6c-9247-8a652bb5f761)"), skipping: unmounted volumes=[webhook-cert], unattached volumes=[webhook-cert ingress-nginx-token-tb4kg]: timed out waiting for the condition | |
Nov 04 15:16:25 minikube kubelet[2093]: E1104 15:16:25.079037 2093 pod_workers.go:191] Error syncing pod f7b92a1c-5b2c-4bc8-8e79-ef944b4699a5 ("ingress-nginx-admission-create-tr846_kube-system(f7b92a1c-5b2c-4bc8-8e79-ef944b4699a5)"), skipping: failed to "StartContainer" for "create" with ImagePullBackOff: "Back-off pulling image \"jettech/kube-webhook-certgen:v1.2.2\"" | |
Nov 04 15:16:33 minikube kubelet[2093]: E1104 15:16:33.079060 2093 pod_workers.go:191] Error syncing pod 175606cb-fd1f-453f-a4eb-6e93bdcffba5 ("ingress-nginx-admission-patch-k74gx_kube-system(175606cb-fd1f-453f-a4eb-6e93bdcffba5)"), skipping: failed to "StartContainer" for "patch" with ImagePullBackOff: "Back-off pulling image \"jettech/kube-webhook-certgen:v1.2.2\"" | |
Nov 04 15:16:40 minikube kubelet[2093]: E1104 15:16:40.074938 2093 pod_workers.go:191] Error syncing pod f7b92a1c-5b2c-4bc8-8e79-ef944b4699a5 ("ingress-nginx-admission-create-tr846_kube-system(f7b92a1c-5b2c-4bc8-8e79-ef944b4699a5)"), skipping: failed to "StartContainer" for "create" with ImagePullBackOff: "Back-off pulling image \"jettech/kube-webhook-certgen:v1.2.2\"" | |
Nov 04 15:16:47 minikube kubelet[2093]: E1104 15:16:47.080505 2093 pod_workers.go:191] Error syncing pod 175606cb-fd1f-453f-a4eb-6e93bdcffba5 ("ingress-nginx-admission-patch-k74gx_kube-system(175606cb-fd1f-453f-a4eb-6e93bdcffba5)"), skipping: failed to "StartContainer" for "patch" with ImagePullBackOff: "Back-off pulling image \"jettech/kube-webhook-certgen:v1.2.2\"" | |
Nov 04 15:16:55 minikube kubelet[2093]: E1104 15:16:55.078700 2093 pod_workers.go:191] Error syncing pod f7b92a1c-5b2c-4bc8-8e79-ef944b4699a5 ("ingress-nginx-admission-create-tr846_kube-system(f7b92a1c-5b2c-4bc8-8e79-ef944b4699a5)"), skipping: failed to "StartContainer" for "create" with ImagePullBackOff: "Back-off pulling image \"jettech/kube-webhook-certgen:v1.2.2\"" | |
Nov 04 15:17:00 minikube kubelet[2093]: E1104 15:17:00.078151 2093 pod_workers.go:191] Error syncing pod 175606cb-fd1f-453f-a4eb-6e93bdcffba5 ("ingress-nginx-admission-patch-k74gx_kube-system(175606cb-fd1f-453f-a4eb-6e93bdcffba5)"), skipping: failed to "StartContainer" for "patch" with ImagePullBackOff: "Back-off pulling image \"jettech/kube-webhook-certgen:v1.2.2\"" | |
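Reading the kubelet log above as a chain: DNS lookups for registry-1.docker.io via 192.168.49.1:53 time out inside the minikube container, so the `jettech/kube-webhook-certgen:v1.2.2` image can never be pulled (ErrImagePull → ImagePullBackOff); the admission create/patch jobs therefore never run, the `ingress-nginx-admission` secret is never created, and the controller pod's `webhook-cert` volume mount fails. A hedged diagnostic sketch (assumes minikube with the docker driver, as these logs suggest):

```shell
# Guard so this sketch exits cleanly where minikube is not available.
command -v minikube >/dev/null 2>&1 || { echo "minikube not installed; skipping"; exit 0; }

# Reproduce the DNS failure from inside the minikube node.
minikube ssh -- nslookup registry-1.docker.io

# See which resolver the node is actually using (192.168.49.1 per the logs).
minikube ssh -- cat /etc/resolv.conf

# Test whether the host itself can reach Docker Hub, to isolate the failure
# to the minikube network rather than the host's connectivity.
docker pull jettech/kube-webhook-certgen:v1.2.2
```

If the host pull succeeds while the in-node lookup fails, the problem is the embedded DNS forwarding on the minikube docker network; recreating the cluster (`minikube delete && minikube start`) or restarting Docker often clears it.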
==> storage-provisioner [d0eb30e789a6] <== | |
I1104 14:33:21.299712 1 leaderelection.go:242] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath... | |
I1104 14:33:21.334908 1 leaderelection.go:252] successfully acquired lease kube-system/k8s.io-minikube-hostpath | |
I1104 14:33:21.335684 1 controller.go:799] Starting provisioner controller k8s.io/minikube-hostpath_minikube_9bf27be1-4099-4c6c-81c3-cc0bd9010bfd! | |
I1104 14:33:21.335667 1 event.go:281] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"48f00c25-5150-438d-80be-89922d2f9460", APIVersion:"v1", ResourceVersion:"452", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' minikube_9bf27be1-4099-4c6c-81c3-cc0bd9010bfd became leader | |
I1104 14:33:21.436561 1 controller.go:848] Started provisioner controller k8s.io/minikube-hostpath_minikube_9bf27be1-4099-4c6c-81c3-cc0bd9010bfd! |