v1.17.3-e2e.log
This file has been truncated.
I0306 02:38:59.604733 19 test_context.go:406] Using a temporary kubeconfig file from in-cluster config : /tmp/kubeconfig-780690759
I0306 02:38:59.604746 19 test_context.go:419] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0306 02:38:59.604856 19 e2e.go:109] Starting e2e run "e0336c13-b471-4627-93ef-421cefc2a866" on Ginkgo node 1
{"msg":"Test Suite starting","total":278,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1583462338 - Will randomize all specs
Will run 278 of 4843 specs
Mar 6 02:38:59.650: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
Mar 6 02:38:59.652: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Mar 6 02:38:59.663: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Mar 6 02:38:59.693: INFO: 22 / 22 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Mar 6 02:38:59.693: INFO: expected 3 pod replicas in namespace 'kube-system', 3 are Running and Ready.
Mar 6 02:38:59.693: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Mar 6 02:38:59.700: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-amd64' (0 seconds elapsed)
Mar 6 02:38:59.700: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm' (0 seconds elapsed)
Mar 6 02:38:59.700: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm64' (0 seconds elapsed)
Mar 6 02:38:59.700: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-ppc64le' (0 seconds elapsed)
Mar 6 02:38:59.700: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-s390x' (0 seconds elapsed)
Mar 6 02:38:59.700: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Mar 6 02:38:59.700: INFO: e2e test version: v1.17.3
Mar 6 02:38:59.705: INFO: kube-apiserver version: v1.17.3
Mar 6 02:38:59.705: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
Mar 6 02:38:59.709: INFO: Cluster IP family: ipv4
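For reference, the client bootstrap behind the ">>> kubeConfig" lines above can be sketched in a few lines of client-go. This is a minimal illustration, not the suite's code; it assumes a recent client-go (the context-taking List signature) and reuses the kubeconfig path printed in the log:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a rest.Config from the kubeconfig file the e2e run was handed.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig-780690759")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// The suite's startup gate amounts to listing kube-system pods and
	// polling until they are all running and ready.
	pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d pods in kube-system\n", len(pods.Items))
}
```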
SSS
------------------------------
[sig-api-machinery] Garbage collector
should delete RS created by deployment when not orphaning [Conformance]
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 6 02:38:59.709: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename gc
Mar 6 02:38:59.738: INFO: Found PodSecurityPolicies; assuming PodSecurityPolicy is enabled.
Mar 6 02:38:59.822: INFO: Found ClusterRoles; assuming RBAC is enabled.
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in gc-4031
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
Mar 6 02:39:01.023: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 6 02:39:01.024: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
W0306 02:39:01.023932 19 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
STEP: Destroying namespace "gc-4031" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":278,"completed":1,"skipped":3,"failed":0}
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook
should execute prestop exec hook properly [NodeConformance] [Conformance]
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Lifecycle Hook
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 6 02:39:01.029: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-lifecycle-hook-5282
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Mar 6 02:39:15.186: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Mar 6 02:39:15.188: INFO: Pod pod-with-prestop-exec-hook still exists
Mar 6 02:39:17.188: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Mar 6 02:39:17.190: INFO: Pod pod-with-prestop-exec-hook still exists
Mar 6 02:39:19.188: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Mar 6 02:39:19.190: INFO: Pod pod-with-prestop-exec-hook still exists
Mar 6 02:39:21.188: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Mar 6 02:39:21.191: INFO: Pod pod-with-prestop-exec-hook still exists
Mar 6 02:39:23.188: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Mar 6 02:39:23.190: INFO: Pod pod-with-prestop-exec-hook still exists
Mar 6 02:39:25.188: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Mar 6 02:39:25.190: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 6 02:39:25.208: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-5282" for this suite.
• [SLOW TEST:24.187 seconds]
[k8s.io] Container Lifecycle Hook
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
when create a pod with lifecycle hook
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
should execute prestop exec hook properly [NodeConformance] [Conformance]
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":278,"completed":2,"skipped":3,"failed":0}
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
works for CRD without validation schema [Conformance]
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 6 02:39:25.217: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in crd-publish-openapi-6499
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD without validation schema [Conformance]
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Mar 6 02:39:25.346: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Mar 6 02:39:33.107: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 --namespace=crd-publish-openapi-6499 create -f -'
Mar 6 02:39:33.436: INFO: stderr: ""
Mar 6 02:39:33.436: INFO: stdout: "e2e-test-crd-publish-openapi-4043-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n"
Mar 6 02:39:33.436: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 --namespace=crd-publish-openapi-6499 delete e2e-test-crd-publish-openapi-4043-crds test-cr'
Mar 6 02:39:33.545: INFO: stderr: ""
Mar 6 02:39:33.545: INFO: stdout: "e2e-test-crd-publish-openapi-4043-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n"
Mar 6 02:39:33.545: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 --namespace=crd-publish-openapi-6499 apply -f -'
Mar 6 02:39:33.680: INFO: stderr: ""
Mar 6 02:39:33.680: INFO: stdout: "e2e-test-crd-publish-openapi-4043-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n"
Mar 6 02:39:33.680: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 --namespace=crd-publish-openapi-6499 delete e2e-test-crd-publish-openapi-4043-crds test-cr'
Mar 6 02:39:33.753: INFO: stderr: ""
Mar 6 02:39:33.753: INFO: stdout: "e2e-test-crd-publish-openapi-4043-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR without validation schema
Mar 6 02:39:33.753: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 explain e2e-test-crd-publish-openapi-4043-crds'
Mar 6 02:39:33.887: INFO: stderr: ""
Mar 6 02:39:33.887: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-4043-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n <empty>\n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 6 02:39:36.648: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-6499" for this suite.
• [SLOW TEST:11.439 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
works for CRD without validation schema [Conformance]
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":278,"completed":3,"skipped":3,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
should honor timeout [Conformance]
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 6 02:39:36.655: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename webhook
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in webhook-618
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Mar 6 02:39:37.141: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Mar 6 02:39:40.159: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should honor timeout [Conformance]
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Setting timeout (1s) shorter than webhook latency (5s)
STEP: Registering slow webhook via the AdmissionRegistration API
Mar 6 02:39:40.228: INFO: Waiting for webhook configuration to be ready...
Mar 6 02:39:50.342: INFO: Waiting for webhook configuration to be ready...
Mar 6 02:40:00.439: INFO: Waiting for webhook configuration to be ready...
Mar 6 02:40:10.540: INFO: Waiting for webhook configuration to be ready...
Mar 6 02:40:20.550: INFO: Waiting for webhook configuration to be ready...
Mar 6 02:40:20.550: FAIL: waiting for webhook configuration to be ready
Unexpected error:
<*errors.errorString | 0xc0000b3950>: {
s: "timed out waiting for the condition",
}
timed out waiting for the condition
occurred
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
STEP: Collecting events from namespace "webhook-618".
STEP: Found 6 events.
Mar 6 02:40:20.553: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for sample-webhook-deployment-5f65f8c764-d2xcb: {default-scheduler } Scheduled: Successfully assigned webhook-618/sample-webhook-deployment-5f65f8c764-d2xcb to worker02
Mar 6 02:40:20.553: INFO: At 2020-03-06 02:39:37 +0000 UTC - event for sample-webhook-deployment: {deployment-controller } ScalingReplicaSet: Scaled up replica set sample-webhook-deployment-5f65f8c764 to 1
Mar 6 02:40:20.553: INFO: At 2020-03-06 02:39:37 +0000 UTC - event for sample-webhook-deployment-5f65f8c764: {replicaset-controller } SuccessfulCreate: Created pod: sample-webhook-deployment-5f65f8c764-d2xcb
Mar 6 02:40:20.553: INFO: At 2020-03-06 02:39:37 +0000 UTC - event for sample-webhook-deployment-5f65f8c764-d2xcb: {kubelet worker02} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
Mar 6 02:40:20.553: INFO: At 2020-03-06 02:39:37 +0000 UTC - event for sample-webhook-deployment-5f65f8c764-d2xcb: {kubelet worker02} Created: Created container sample-webhook
Mar 6 02:40:20.553: INFO: At 2020-03-06 02:39:38 +0000 UTC - event for sample-webhook-deployment-5f65f8c764-d2xcb: {kubelet worker02} Started: Started container sample-webhook
Mar 6 02:40:20.555: INFO: POD NODE PHASE GRACE CONDITIONS
Mar 6 02:40:20.555: INFO: sample-webhook-deployment-5f65f8c764-d2xcb worker02 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-06 02:39:37 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-06 02:39:38 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-06 02:39:38 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-06 02:39:37 +0000 UTC }]
Mar 6 02:40:20.556: INFO:
Mar 6 02:40:20.558: INFO:
Logging node info for node master01
Mar 6 02:40:20.560: INFO: Node Info: &Node{ObjectMeta:{master01 /api/v1/nodes/master01 aeae8a5b-4e17-4702-bb02-bcfde6cdb12a 2910 0 2020-03-06 02:29:18 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master01 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"76:15:82:0d:8b:ab"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:192.168.1.247 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{42140479488 0} {<nil>} 41152812Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3823214592 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{37926431477 0} {<nil>} 37926431477 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3718356992 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-03-06 02:38:54 +0000 UTC,LastTransitionTime:2020-03-06 02:29:18 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-03-06 02:38:54 +0000 UTC,LastTransitionTime:2020-03-06 02:29:18 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-03-06 02:38:54 +0000 UTC,LastTransitionTime:2020-03-06 02:29:18 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-03-06 02:38:54 +0000 UTC,LastTransitionTime:2020-03-06 02:30:33 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:192.168.1.247,},NodeAddress{Type:Hostname,Address:master01,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:20200220105402131453637367482142,SystemUUID:195205FE-EE72-4794-8EAA-AC554EFDEC9B,BootID:6a3bf627-7476-4f52-84fa-f3eab6d26427,KernelVersion:3.10.0-1062.12.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.5,KubeletVersion:v1.17.3,KubeProxyVersion:v1.17.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[192.168.1.252/library/k8s-keepalived@sha256:3db0032ef2feef675710595681cf9463470af179cd324c6773e831b6649ef785 192.168.1.252/library/k8s-keepalived:1.3.5],SizeBytes:356553439,},ContainerImage{Names:[sonobuoy/systemd-logs@sha256:fadad24a66ddd544987c38811108a73d1a306dd3b5e3f090b207786f2825ffde sonobuoy/systemd-logs:v0.3],SizeBytes:297365055,},ContainerImage{Names:[192.168.1.252/library/etcd-amd64@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216 192.168.1.252/library/etcd-amd64:3.4.3-0],SizeBytes:288426917,},ContainerImage{Names:[192.168.1.252/library/node@sha256:3226b047a7034918a05c986347c5fb4d2cce6d0844f325851bfba586271ee617 192.168.1.252/library/node:v3.12.0],SizeBytes:257501722,},ContainerImage{Names:[192.168.1.252/library/cni@sha256:dc3bc525f1d3b794db1f2a7ceb7d8b84699d13e1431fbc117063f7e2075ff4b5 192.168.1.252/library/cni:v3.12.0],SizeBytes:206678344,},ContainerImage{Names:[192.168.1.252/library/kube-apiserver@sha256:4ee4113bcce32ae436b15364b09a2439dee15f6a19bbcf43a470d3dc879b0c62 192.168.1.252/library/kube-apiserver:v1.17.3],SizeBytes:170986003,},ContainerImage{Names:[192.168.1.252/library/envoy@sha256:b36ee021fc4d285de7861dbaee01e7437ce1d63814ead6ae3e4dfcad4a951b2e 192.168.1.252/library/envoy:v1.12.2],SizeBytes:170487454,},ContainerImage{Names:[192.168.1.252/library/kube-controller-manager@sha256:8f03391c0d22e3da8d22725178efe3c4338e1920504f7f4eb4a1c7f5f40c4c6e 192.168.1.252/library/kube-controller-manager:v1.17.3],SizeBytes:160918035,},ContainerImage{Names:[192.168.1.252/library/dashboard@sha256:4e0d39dae7e089b77fe2bbcef648f89905716db9c1f0884950bfd42d9f446c29 192.168.1.252/library/dashboard:v2.0.0-rc5],SizeBytes:126359420,},ContainerImage{Names:[192.168.1.252/library/kube-proxy@sha256:8dfd672298f8fbcd37731e0f2e45be87199cee6ee1f482ffec766693dcec0ec4 192.168.1.252/library/kube-proxy:v1.17.3],SizeBytes:115964919,},ContainerImage{Names:[192.168.1.252/library/pod2daemon-flexvol@sha256:2bf967507ad1adb749f3484b5d39e7d7b8700c4a0f836e8093dae5c57a585ccf 192.168.1.252/library/pod2daemon-flexvol:v3.12.0],SizeBytes:111122324,},ContainerImage{Names:[192.168.1.252/library/kube-scheduler@sha256:eea535b58da5b2fb66b32c61c6913d014cf85061a39527ad9bca2fa84b53dc1c 192.168.1.252/library/kube-scheduler:v1.17.3],SizeBytes:94435859,},ContainerImage{Names:[sonobuoy/sonobuoy@sha256:73f9cfe546ac6d0d5b94308293484c29c3b02fc341a975cfca97c80dd8728ed7 sonobuoy/sonobuoy:v0.17.2],SizeBytes:84339798,},ContainerImage{Names:[192.168.1.252/library/k8s-haproxy@sha256:f88bca67a2782e7bd88af168d5c11216c32104d12fe9240fac54a1d3196e3f9c 192.168.1.252/library/k8s-haproxy:2.0.0],SizeBytes:73550856,},ContainerImage{Names:[192.168.1.252/library/kube-controllers@sha256:edf14a5bcc663d2b0013b1830469626b7aa27206cbc7715ed83c042890ca5837 192.168.1.252/library/kube-controllers:v3.12.0],SizeBytes:56567983,},ContainerImage{Names:[192.168.1.252/library/typha@sha256:3baf9aef445a3224160748d6f560426eab798d6c65620020b2466e114bf6805f 192.168.1.252/library/typha:v3.12.0],SizeBytes:56034822,},ContainerImage{Names:[192.168.1.252/library/flannel@sha256:bd76b84c74ad70368a2341c2402841b75950df881388e43fc2aca000c546653a 192.168.1.252/library/flannel:v0.11.0-amd64],SizeBytes:52567296,},ContainerImage{Names:[192.168.1.252/library/ctl@sha256:128e4c95cf92a482496d591c43cad2a6a21fab1f0e8a8f13e8503f1324106dc8 192.168.1.252/library/ctl:v3.12.0],SizeBytes:47895826,},ContainerImage{Names:[192.168.1.252/library/coredns@sha256:608ac7ccba5ce41c6941fca13bc67059c1eef927fd968b554b790e21cc92543c 192.168.1.252/library/coredns:1.6.5],SizeBytes:41578211,},ContainerImage{Names:[192.168.1.252/library/metrics-scraper@sha256:e24a74b3b1cdc84d6285d507a12eb06907fd8c457b3e8ae9baa9418eca43efc4 192.168.1.252/library/metrics-scraper:v1.0.3],SizeBytes:40105664,},ContainerImage{Names:[192.168.1.252/library/metrics-server-amd64@sha256:c9c4e95068b51d6b33a9dccc61875df07dc650abbf4ac1a19d58b4628f89288b 192.168.1.252/library/metrics-server-amd64:v0.3.6],SizeBytes:39944451,},ContainerImage{Names:[192.168.1.252/library/contour@sha256:3e10c69dfeaa830b84a50e6b47ce90e0f5a1aa84daf77f7662313077fa9579cf 192.168.1.252/library/contour:v1.1.0],SizeBytes:35721216,},ContainerImage{Names:[192.168.1.252/library/kuard-amd64@sha256:bd17153e9a3319f401acc7a27759243f37d422c06cbbf01cb3e1f54bbbfe14f4 192.168.1.252/library/kuard-amd64:1],SizeBytes:19745911,},ContainerImage{Names:[192.168.1.252/library/pause@sha256:fcaff905397ba63fd376d0c3019f1f1cb6e7506131389edbcb3d22719f1ae54d 192.168.1.252/library/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Mar 6 02:40:20.560: INFO:
Logging kubelet events for node master01
Mar 6 02:40:20.564: INFO:
Logging pods the kubelet thinks is on node master01
Mar 6 02:40:20.576: INFO: kube-flannel-ds-amd64-6mbnb started at 2020-03-06 02:30:22 +0000 UTC (1+1 container statuses recorded)
Mar 6 02:40:20.576: INFO: Init container install-cni ready: true, restart count 0
Mar 6 02:40:20.576: INFO: Container kube-flannel ready: true, restart count 0
Mar 6 02:40:20.576: INFO: kube-proxy-4j8ft started at 2020-03-06 02:30:22 +0000 UTC (0+1 container statuses recorded)
Mar 6 02:40:20.576: INFO: Container kube-proxy ready: true, restart count 0
Mar 6 02:40:20.576: INFO: kube-apiserver-master01 started at 2020-03-06 02:30:22 +0000 UTC (0+1 container statuses recorded)
Mar 6 02:40:20.576: INFO: Container kube-apiserver ready: true, restart count 0
Mar 6 02:40:20.576: INFO: kube-controller-manager-master01 started at 2020-03-06 02:30:22 +0000 UTC (0+1 container statuses recorded)
Mar 6 02:40:20.576: INFO: Container kube-controller-manager ready: true, restart count 1
Mar 6 02:40:20.576: INFO: kube-scheduler-master01 started at 2020-03-06 02:30:22 +0000 UTC (0+1 container statuses recorded)
Mar 6 02:40:20.576: INFO: Container kube-scheduler ready: true, restart count 1
Mar 6 02:40:20.576: INFO: sonobuoy-systemd-logs-daemon-set-2e5dbce20e154397-drhpn started at 2020-03-06 02:38:12 +0000 UTC (0+2 container statuses recorded)
Mar 6 02:40:20.576: INFO: Container sonobuoy-worker ready: true, restart count 0
Mar 6 02:40:20.576: INFO: Container systemd-logs ready: true, restart count 0
W0306 02:40:20.579790 19 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar 6 02:40:20.605: INFO:
Latency metrics for node master01
Mar 6 02:40:20.605: INFO:
Logging node info for node master02
Mar 6 02:40:20.607: INFO: Node Info: &Node{ObjectMeta:{master02 /api/v1/nodes/master02 6a0ecb6f-ef31-4754-858b-3eba76999224 2904 0 2020-03-06 02:29:17 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master02 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"82:c1:38:99:3b:39"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:192.168.1.248 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{42140479488 0} {<nil>} 41152812Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3823214592 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{37926431477 0} {<nil>} 37926431477 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3718356992 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-03-06 02:38:51 +0000 UTC,LastTransitionTime:2020-03-06 02:29:16 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-03-06 02:38:51 +0000 UTC,LastTransitionTime:2020-03-06 02:29:16 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-03-06 02:38:51 +0000 UTC,LastTransitionTime:2020-03-06 02:29:16 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-03-06 02:38:51 +0000 UTC,LastTransitionTime:2020-03-06 02:30:10 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:192.168.1.248,},NodeAddress{Type:Hostname,Address:master02,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:20200220105402131453637367482142,SystemUUID:8B2C4639-6D22-4D0D-A03C-F6D7E328F9D5,BootID:efd7329f-ae31-4806-ba13-7fdd5fad57df,KernelVersion:3.10.0-1062.12.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.5,KubeletVersion:v1.17.3,KubeProxyVersion:v1.17.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[sonobuoy/systemd-logs@sha256:fadad24a66ddd544987c38811108a73d1a306dd3b5e3f090b207786f2825ffde sonobuoy/systemd-logs:v0.3],SizeBytes:297365055,},ContainerImage{Names:[192.168.1.252/library/etcd-amd64@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216 192.168.1.252/library/etcd-amd64:3.4.3-0],SizeBytes:288426917,},ContainerImage{Names:[192.168.1.252/library/kube-apiserver@sha256:4ee4113bcce32ae436b15364b09a2439dee15f6a19bbcf43a470d3dc879b0c62 192.168.1.252/library/kube-apiserver:v1.17.3],SizeBytes:170986003,},ContainerImage{Names:[192.168.1.252/library/kube-controller-manager@sha256:8f03391c0d22e3da8d22725178efe3c4338e1920504f7f4eb4a1c7f5f40c4c6e 192.168.1.252/library/kube-controller-manager:v1.17.3],SizeBytes:160918035,},ContainerImage{Names:[192.168.1.252/library/kube-proxy@sha256:8dfd672298f8fbcd37731e0f2e45be87199cee6ee1f482ffec766693dcec0ec4 192.168.1.252/library/kube-proxy:v1.17.3],SizeBytes:115964919,},ContainerImage{Names:[192.168.1.252/library/kube-scheduler@sha256:eea535b58da5b2fb66b32c61c6913d014cf85061a39527ad9bca2fa84b53dc1c 192.168.1.252/library/kube-scheduler:v1.17.3],SizeBytes:94435859,},ContainerImage{Names:[sonobuoy/sonobuoy@sha256:73f9cfe546ac6d0d5b94308293484c29c3b02fc341a975cfca97c80dd8728ed7 sonobuoy/sonobuoy:v0.17.2],SizeBytes:84339798,},ContainerImage{Names:[192.168.1.252/library/k8s-haproxy@sha256:f88bca67a2782e7bd88af168d5c11216c32104d12fe9240fac54a1d3196e3f9c 192.168.1.252/library/k8s-haproxy:2.0.0],SizeBytes:73550856,},ContainerImage{Names:[192.168.1.252/library/flannel@sha256:bd76b84c74ad70368a2341c2402841b75950df881388e43fc2aca000c546653a 192.168.1.252/library/flannel:v0.11.0-amd64],SizeBytes:52567296,},ContainerImage{Names:[192.168.1.252/library/coredns@sha256:608ac7ccba5ce41c6941fca13bc67059c1eef927fd968b554b790e21cc92543c 192.168.1.252/library/coredns:1.6.5],SizeBytes:41578211,},ContainerImage{Names:[192.168.1.252/library/pause@sha256:fcaff905397ba63fd376d0c3019f1f1cb6e7506131389edbcb3d22719f1ae54d 192.168.1.252/library/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Mar 6 02:40:20.607: INFO:
Logging kubelet events for node master02
Mar 6 02:40:20.613: INFO:
Logging pods the kubelet thinks is on node master02
Mar 6 02:40:20.629: INFO: sonobuoy-systemd-logs-daemon-set-2e5dbce20e154397-2zmwm started at 2020-03-06 02:38:12 +0000 UTC (0+2 container statuses recorded)
Mar 6 02:40:20.629: INFO: Container sonobuoy-worker ready: true, restart count 0
Mar 6 02:40:20.629: INFO: Container systemd-logs ready: true, restart count 0
Mar 6 02:40:20.629: INFO: kube-apiserver-master02 started at 2020-03-06 02:29:50 +0000 UTC (0+1 container statuses recorded)
Mar 6 02:40:20.629: INFO: Container kube-apiserver ready: true, restart count 0
Mar 6 02:40:20.629: INFO: kube-controller-manager-master02 started at 2020-03-06 02:29:50 +0000 UTC (0+1 container statuses recorded)
Mar 6 02:40:20.629: INFO: Container kube-controller-manager ready: true, restart count 1
Mar 6 02:40:20.629: INFO: kube-scheduler-master02 started at 2020-03-06 02:29:23 +0000 UTC (0+1 container statuses recorded)
Mar 6 02:40:20.629: INFO: Container kube-scheduler ready: true, restart count 1
Mar 6 02:40:20.629: INFO: kube-proxy-scdss started at 2020-03-06 02:30:00 +0000 UTC (0+1 container statuses recorded)
Mar 6 02:40:20.629: INFO: Container kube-proxy ready: true, restart count 0
Mar 6 02:40:20.629: INFO: kube-flannel-ds-amd64-vfl78 started at 2020-03-06 02:30:00 +0000 UTC (1+1 container statuses recorded)
Mar 6 02:40:20.629: INFO: Init container install-cni ready: true, restart count 0
Mar 6 02:40:20.629: INFO: Container kube-flannel ready: true, restart count 0
Mar 6 02:40:20.629: INFO: coredns-7795996659-phdkc started at 2020-03-06 02:30:13 +0000 UTC (0+1 container statuses recorded)
Mar 6 02:40:20.629: INFO: Container coredns ready: true, restart count 0
W0306 02:40:20.632029 19 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar 6 02:40:20.648: INFO:
Latency metrics for node master02
Mar 6 02:40:20.648: INFO:
Logging node info for node master03
Mar 6 02:40:20.650: INFO: Node Info: &Node{ObjectMeta:{master03 /api/v1/nodes/master03 c508ee4c-fe9d-4c73-a857-e57fba26fa86 2903 0 2020-03-06 02:29:17 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master03 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"4a:aa:08:ea:16:90"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:192.168.1.249 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{42140479488 0} {<nil>} 41152812Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3823226880 0} {<nil>} 3733620Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{37926431477 0} {<nil>} 37926431477 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3718369280 0} {<nil>} 3631220Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-03-06 02:38:51 +0000 UTC,LastTransitionTime:2020-03-06 02:29:17 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-03-06 02:38:51 +0000 UTC,LastTransitionTime:2020-03-06 02:29:17 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-03-06 02:38:51 +0000 UTC,LastTransitionTime:2020-03-06 02:29:17 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-03-06 02:38:51 +0000 UTC,LastTransitionTime:2020-03-06 02:30:10 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:192.168.1.249,},NodeAddress{Type:Hostname,Address:master03,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:20200220105402131453637367482142,SystemUUID:0C8F0A67-EB7E-42DE-9418-0973FE735A08,BootID:05b1fa23-e6be-4032-bc93-8800264dff91,KernelVersion:3.10.0-1062.12.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.5,KubeletVersion:v1.17.3,KubeProxyVersion:v1.17.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[sonobuoy/systemd-logs@sha256:fadad24a66ddd544987c38811108a73d1a306dd3b5e3f090b207786f2825ffde sonobuoy/systemd-logs:v0.3],SizeBytes:297365055,},ContainerImage{Names:[192.168.1.252/library/etcd-amd64@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216 192.168.1.252/library/etcd-amd64:3.4.3-0],SizeBytes:288426917,},ContainerImage{Names:[192.168.1.252/library/kube-apiserver@sha256:4ee4113bcce32ae436b15364b09a2439dee15f6a19bbcf43a470d3dc879b0c62 192.168.1.252/library/kube-apiserver:v1.17.3],SizeBytes:170986003,},ContainerImage{Names:[192.168.1.252/library/kube-controller-manager@sha256:8f03391c0d22e3da8d22725178efe3c4338e1920504f7f4eb4a1c7f5f40c4c6e 192.168.1.252/library/kube-controller-manager:v1.17.3],SizeBytes:160918035,},ContainerImage{Names:[192.168.1.252/library/dashboard@sha256:4e0d39dae7e089b77fe2bbcef648f89905716db9c1f0884950bfd42d9f446c29 192.168.1.252/library/dashboard:v2.0.0-rc5],SizeBytes:126359420,},ContainerImage{Names:[192.168.1.252/library/kube-proxy@sha256:8dfd672298f8fbcd37731e0f2e45be87199cee6ee1f482ffec766693dcec0ec4 192.168.1.252/library/kube-proxy:v1.17.3],SizeBytes:115964919,},ContainerImage{Names:[192.168.1.252/library/kube-scheduler@sha256:eea535b58da5b2fb66b32c61c6913d014cf85061a39527ad9bca2fa84b53dc1c 192.168.1.252/library/kube-scheduler:v1.17.3],SizeBytes:94435859,},ContainerImage{Names:[sonobuoy/sonobuoy@sha256:73f9cfe546ac6d0d5b94308293484c29c3b02fc341a975cfca97c80dd8728ed7 sonobuoy/sonobuoy:v0.17.2],SizeBytes:84339798,},ContainerImage{Names:[192.168.1.252/library/k8s-haproxy@sha256:f88bca67a2782e7bd88af168d5c11216c32104d12fe9240fac54a1d3196e3f9c 192.168.1.252/library/k8s-haproxy:2.0.0],SizeBytes:73550856,},ContainerImage{Names:[192.168.1.252/library/flannel@sha256:bd76b84c74ad70368a2341c2402841b75950df881388e43fc2aca000c546653a 192.168.1.252/library/flannel:v0.11.0-amd64],SizeBytes:52567296,},ContainerImage{Names:[192.168.1.252/library/coredns@sha256:608ac7ccba5ce41c6941fca13bc67059c1eef927fd968b554b790e21cc92543c 192.168.1.252/library/coredns:1.6.5],SizeBytes:41578211,},ContainerImage{Names:[192.168.1.252/library/metrics-scraper@sha256:e24a74b3b1cdc84d6285d507a12eb06907fd8c457b3e8ae9baa9418eca43efc4 192.168.1.252/library/metrics-scraper:v1.0.3],SizeBytes:40105664,},ContainerImage{Names:[192.168.1.252/library/pause@sha256:fcaff905397ba63fd376d0c3019f1f1cb6e7506131389edbcb3d22719f1ae54d 192.168.1.252/library/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Mar 6 02:40:20.650: INFO:
Logging kubelet events for node master03
Mar 6 02:40:20.654: INFO:
Logging pods the kubelet thinks is on node master03
Mar 6 02:40:20.664: INFO: coredns-7795996659-cmq4d started at 2020-03-06 02:30:13 +0000 UTC (0+1 container statuses recorded)
Mar 6 02:40:20.664: INFO: Container coredns ready: true, restart count 0
Mar 6 02:40:20.664: INFO: sonobuoy-systemd-logs-daemon-set-2e5dbce20e154397-w5psq started at 2020-03-06 02:38:12 +0000 UTC (0+2 container statuses recorded)
Mar 6 02:40:20.664: INFO: Container sonobuoy-worker ready: true, restart count 0
Mar 6 02:40:20.664: INFO: Container systemd-logs ready: true, restart count 0
Mar 6 02:40:20.664: INFO: kube-apiserver-master03 started at 2020-03-06 02:29:24 +0000 UTC (0+1 container statuses recorded)
Mar 6 02:40:20.664: INFO: Container kube-apiserver ready: true, restart count 0
Mar 6 02:40:20.664: INFO: kube-scheduler-master03 started at 2020-03-06 02:29:38 +0000 UTC (0+1 container statuses recorded)
Mar 6 02:40:20.664: INFO: Container kube-scheduler ready: true, restart count 1
Mar 6 02:40:20.664: INFO: kube-proxy-stbnn started at 2020-03-06 02:30:00 +0000 UTC (0+1 container statuses recorded)
Mar 6 02:40:20.664: INFO: Container kube-proxy ready: true, restart count 0
Mar 6 02:40:20.664: INFO: kubernetes-dashboard-6647798d59-j2ms4 started at 2020-03-06 02:30:10 +0000 UTC (0+1 container statuses recorded)
Mar 6 02:40:20.664: INFO: Container kubernetes-dashboard ready: true, restart count 0
Mar 6 02:40:20.664: INFO: kube-controller-manager-master03 started at 2020-03-06 02:29:50 +0000 UTC (0+1 container statuses recorded)
Mar 6 02:40:20.664: INFO: Container kube-controller-manager ready: true, restart count 1
Mar 6 02:40:20.664: INFO: kube-flannel-ds-amd64-hs69k started at 2020-03-06 02:30:00 +0000 UTC (1+1 container statuses recorded)
Mar 6 02:40:20.664: INFO: Init container install-cni ready: true, restart count 0
Mar 6 02:40:20.664: INFO: Container kube-flannel ready: true, restart count 0
Mar 6 02:40:20.664: INFO: dashboard-metrics-scraper-56568cb9d7-d57kl started at 2020-03-06 02:30:10 +0000 UTC (0+1 container statuses recorded)
Mar 6 02:40:20.664: INFO: Container dashboard-metrics-scraper ready: true, restart count 0
W0306 02:40:20.667076 19 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar 6 02:40:20.685: INFO:
Latency metrics for node master03
Mar 6 02:40:20.685: INFO:
Logging node info for node worker01
Mar 6 02:40:20.687: INFO: Node Info: &Node{ObjectMeta:{worker01 /api/v1/nodes/worker01 cf4203bb-1bfa-4b35-991f-935275b6bc46 3058 0 2020-03-06 02:30:30 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:worker01 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"5a:49:f5:5b:74:b3"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:192.168.1.250 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{42140479488 0} {<nil>} 41152812Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3823214592 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{37926431477 0} {<nil>} 37926431477 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3718356992 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-03-06 02:39:17 +0000 UTC,LastTransitionTime:2020-03-06 02:30:30 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-03-06 02:39:17 +0000 UTC,LastTransitionTime:2020-03-06 02:30:30 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-03-06 02:39:17 +0000 UTC,LastTransitionTime:2020-03-06 02:30:30 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-03-06 02:39:17 +0000 UTC,LastTransitionTime:2020-03-06 02:30:45 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:192.168.1.250,},NodeAddress{Type:Hostname,Address:worker01,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:20200220105402131453637367482142,SystemUUID:5CA364AA-0FF8-4B57-BA86-F28699575F0D,BootID:c85ad0c4-ebcf-4d01-97f0-a36c1cfc50be,KernelVersion:3.10.0-1062.12.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.5,KubeletVersion:v1.17.3,KubeProxyVersion:v1.17.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[sonobuoy/systemd-logs@sha256:fadad24a66ddd544987c38811108a73d1a306dd3b5e3f090b207786f2825ffde sonobuoy/systemd-logs:v0.3],SizeBytes:297365055,},ContainerImage{Names:[192.168.1.252/library/kube-proxy@sha256:8dfd672298f8fbcd37731e0f2e45be87199cee6ee1f482ffec766693dcec0ec4 192.168.1.252/library/kube-proxy:v1.17.3],SizeBytes:115964919,},ContainerImage{Names:[sonobuoy/sonobuoy@sha256:73f9cfe546ac6d0d5b94308293484c29c3b02fc341a975cfca97c80dd8728ed7 sonobuoy/sonobuoy:v0.17.2],SizeBytes:84339798,},ContainerImage{Names:[192.168.1.252/library/flannel@sha256:bd76b84c74ad70368a2341c2402841b75950df881388e43fc2aca000c546653a 192.168.1.252/library/flannel:v0.11.0-amd64],SizeBytes:52567296,},ContainerImage{Names:[192.168.1.252/library/metrics-server-amd64@sha256:c9c4e95068b51d6b33a9dccc61875df07dc650abbf4ac1a19d58b4628f89288b 192.168.1.252/library/metrics-server-amd64:v0.3.6],SizeBytes:39944451,},ContainerImage{Names:[192.168.1.252/library/kuard-amd64@sha256:bd17153e9a3319f401acc7a27759243f37d422c06cbbf01cb3e1f54bbbfe14f4 192.168.1.252/library/kuard-amd64:1],SizeBytes:19745911,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[192.168.1.252/library/pause@sha256:fcaff905397ba63fd376d0c3019f1f1cb6e7506131389edbcb3d22719f1ae54d 192.168.1.252/library/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Mar 6 02:40:20.687: INFO:
Logging kubelet events for node worker01
Mar 6 02:40:20.691: INFO:
Logging pods the kubelet thinks is on node worker01
Mar 6 02:40:20.703: INFO: kube-flannel-ds-amd64-xxhz9 started at 2020-03-06 02:30:30 +0000 UTC (1+1 container statuses recorded)
Mar 6 02:40:20.703: INFO: Init container install-cni ready: true, restart count 0
Mar 6 02:40:20.703: INFO: Container kube-flannel ready: true, restart count 1
Mar 6 02:40:20.703: INFO: contour-54748c65f5-jl5wz started at 2020-03-06 02:30:46 +0000 UTC (0+1 container statuses recorded)
Mar 6 02:40:20.703: INFO: Container contour ready: false, restart count 0
Mar 6 02:40:20.703: INFO: metrics-server-78799bf646-xrsnn started at 2020-03-06 02:30:51 +0000 UTC (0+1 container statuses recorded)
Mar 6 02:40:20.703: INFO: Container metrics-server ready: true, restart count 0
Mar 6 02:40:20.703: INFO: kuard-678c676f5d-vsn86 started at 2020-03-06 02:30:51 +0000 UTC (0+1 container statuses recorded)
Mar 6 02:40:20.703: INFO: Container kuard ready: true, restart count 0
Mar 6 02:40:20.703: INFO: contour-54748c65f5-gk5sz started at 2020-03-06 02:30:51 +0000 UTC (0+1 container statuses recorded)
Mar 6 02:40:20.703: INFO: Container contour ready: false, restart count 0
Mar 6 02:40:20.703: INFO: sonobuoy-systemd-logs-daemon-set-2e5dbce20e154397-2bz8g started at 2020-03-06 02:38:12 +0000 UTC (0+2 container statuses recorded)
Mar 6 02:40:20.703: INFO: Container sonobuoy-worker ready: true, restart count 0
Mar 6 02:40:20.703: INFO: Container systemd-logs ready: true, restart count 0
Mar 6 02:40:20.703: INFO: kube-proxy-kcb8f started at 2020-03-06 02:30:30 +0000 UTC (0+1 container statuses recorded)
Mar 6 02:40:20.703: INFO: Container kube-proxy ready: true, restart count 0
Mar 6 02:40:20.703: INFO: envoy-lvmcb started at 2020-03-06 02:30:45 +0000 UTC (1+1 container statuses recorded)
Mar 6 02:40:20.703: INFO: Init container envoy-initconfig ready: false, restart count 0
Mar 6 02:40:20.703: INFO: Container envoy ready: false, restart count 0
Mar 6 02:40:20.703: INFO: contour-certgen-82k46 started at 2020-03-06 02:30:46 +0000 UTC (0+1 container statuses recorded)
Mar 6 02:40:20.703: INFO: Container contour ready: false, restart count 0
Mar 6 02:40:20.703: INFO: kuard-678c676f5d-m29b6 started at 2020-03-06 02:30:49 +0000 UTC (0+1 container statuses recorded)
Mar 6 02:40:20.703: INFO: Container kuard ready: true, restart count 0
Mar 6 02:40:20.703: INFO: kuard-678c676f5d-tzsnn started at 2020-03-06 02:30:51 +0000 UTC (0+1 container statuses recorded)
Mar 6 02:40:20.703: INFO: Container kuard ready: true, restart count 0
W0306 02:40:20.707820 19 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar 6 02:40:20.727: INFO:
Latency metrics for node worker01
Mar 6 02:40:20.727: INFO:
Logging node info for node worker02
Mar 6 02:40:20.729: INFO: Node Info: &Node{ObjectMeta:{worker02 /api/v1/nodes/worker02 f0994ba1-7e4e-4cc8-b3c8-25d34b25d9ce 3056 0 2020-03-06 02:30:30 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:worker02 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"1a:75:0a:e8:cc:76"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:192.168.1.251 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{42140479488 0} {<nil>} 41152812Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3823214592 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{37926431477 0} {<nil>} 37926431477 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3718356992 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-03-06 02:39:16 +0000 UTC,LastTransitionTime:2020-03-06 02:30:30 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-03-06 02:39:16 +0000 UTC,LastTransitionTime:2020-03-06 02:30:30 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-03-06 02:39:16 +0000 UTC,LastTransitionTime:2020-03-06 02:30:30 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-03-06 02:39:16 +0000 UTC,LastTransitionTime:2020-03-06 02:30:55 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:192.168.1.251,},NodeAddress{Type:Hostname,Address:worker02,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:20200220105402131453637367482142,SystemUUID:EDBF7E33-228B-4233-93CF-7850B5A311E4,BootID:bd6a4f0f-5ddb-4585-83df-253b9292b617,KernelVersion:3.10.0-1062.12.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.5,KubeletVersion:v1.17.3,KubeProxyVersion:v1.17.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[gcr.io/google-containers/conformance@sha256:502434491cbc3fac5d9a606a879e554cf881b2ba5b688bed25f2c33d3ff1c777 gcr.io/google-containers/conformance:v1.17.3],SizeBytes:575831882,},ContainerImage{Names:[sonobuoy/systemd-logs@sha256:fadad24a66ddd544987c38811108a73d1a306dd3b5e3f090b207786f2825ffde sonobuoy/systemd-logs:v0.3],SizeBytes:297365055,},ContainerImage{Names:[192.168.1.252/library/kube-proxy@sha256:8dfd672298f8fbcd37731e0f2e45be87199cee6ee1f482ffec766693dcec0ec4 192.168.1.252/library/kube-proxy:v1.17.3],SizeBytes:115964919,},ContainerImage{Names:[sonobuoy/sonobuoy@sha256:73f9cfe546ac6d0d5b94308293484c29c3b02fc341a975cfca97c80dd8728ed7 sonobuoy/sonobuoy:v0.17.2],SizeBytes:84339798,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5 gcr.io/kubernetes-e2e-test-images/agnhost:2.8],SizeBytes:52800335,},ContainerImage{Names:[192.168.1.252/library/flannel@sha256:bd76b84c74ad70368a2341c2402841b75950df881388e43fc2aca000c546653a 192.168.1.252/library/flannel:v0.11.0-amd64],SizeBytes:52567296,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[192.168.1.252/library/pause@sha256:fcaff905397ba63fd376d0c3019f1f1cb6e7506131389edbcb3d22719f1ae54d 192.168.1.252/library/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Mar 6 02:40:20.729: INFO:
Logging kubelet events for node worker02
Mar 6 02:40:20.733: INFO:
Logging pods the kubelet thinks is on node worker02
Mar 6 02:40:20.737: INFO: kube-proxy-5xxdb started at 2020-03-06 02:30:30 +0000 UTC (0+1 container statuses recorded)
Mar 6 02:40:20.737: INFO: Container kube-proxy ready: true, restart count 1
Mar 6 02:40:20.737: INFO: envoy-wgz76 started at 2020-03-06 02:30:55 +0000 UTC (1+1 container statuses recorded)
Mar 6 02:40:20.737: INFO: Init container envoy-initconfig ready: false, restart count 0
Mar 6 02:40:20.737: INFO: Container envoy ready: false, restart count 0
Mar 6 02:40:20.737: INFO: sonobuoy started at 2020-03-06 02:38:02 +0000 UTC (0+1 container statuses recorded)
Mar 6 02:40:20.737: INFO: Container kube-sonobuoy ready: true, restart count 0
Mar 6 02:40:20.737: INFO: sonobuoy-e2e-job-67137ff64ac145d3 started at 2020-03-06 02:38:12 +0000 UTC (0+2 container statuses recorded)
Mar 6 02:40:20.737: INFO: Container e2e ready: true, restart count 0
Mar 6 02:40:20.737: INFO: Container sonobuoy-worker ready: true, restart count 0
Mar 6 02:40:20.737: INFO: sonobuoy-systemd-logs-daemon-set-2e5dbce20e154397-bpjtd started at 2020-03-06 02:38:12 +0000 UTC (0+2 container statuses recorded)
Mar 6 02:40:20.737: INFO: Container sonobuoy-worker ready: true, restart count 0
Mar 6 02:40:20.737: INFO: Container systemd-logs ready: true, restart count 0
Mar 6 02:40:20.737: INFO: sample-webhook-deployment-5f65f8c764-d2xcb started at 2020-03-06 02:39:37 +0000 UTC (0+1 container statuses recorded)
Mar 6 02:40:20.737: INFO: Container sample-webhook ready: true, restart count 0
Mar 6 02:40:20.737: INFO: kube-flannel-ds-amd64-ztfzf started at 2020-03-06 02:30:30 +0000 UTC (1+1 container statuses recorded)
Mar 6 02:40:20.737: INFO: Init container install-cni ready: true, restart count 0
Mar 6 02:40:20.737: INFO: Container kube-flannel ready: true, restart count 0
W0306 02:40:20.740228 19 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar 6 02:40:20.756: INFO:
Latency metrics for node worker02
Mar 6 02:40:20.756: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-618" for this suite.
STEP: Destroying namespace "webhook-618-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• Failure [44.182 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should honor timeout [Conformance] [It]
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Mar 6 02:40:20.550: waiting for webhook configuration to be ready
Unexpected error:
<*errors.errorString | 0xc0000b3950>: {
s: "timed out waiting for the condition",
}
timed out waiting for the condition
occurred
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:2225
------------------------------
{"msg":"FAILED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":278,"completed":3,"skipped":22,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]"]}
SSSSS | |
------------------------------ | |
[sig-storage] EmptyDir volumes | |
pod should support shared volumes between containers [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
[BeforeEach] [sig-storage] EmptyDir volumes | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 | |
STEP: Creating a kubernetes client | |
Mar 6 02:40:20.837: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759 | |
STEP: Building a namespace api object, basename emptydir | |
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-8781 | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[It] pod should support shared volumes between containers [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
STEP: Creating Pod | |
STEP: Waiting for the pod to be running | |
STEP: Getting the pod | |
STEP: Reading file content from the nginx-container | |
Mar 6 02:40:28.988: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-8781 PodName:pod-sharedvolume-12be2fd0-e4c9-4a25-a178-e5a4766a478e ContainerName:busybox-main-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} | |
Mar 6 02:40:28.988: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759 | |
Mar 6 02:40:29.088: INFO: Exec stderr: "" | |
[AfterEach] [sig-storage] EmptyDir volumes | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 | |
Mar 6 02:40:29.088: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
STEP: Destroying namespace "emptydir-8781" for this suite. | |
• [SLOW TEST:8.259 seconds] | |
[sig-storage] EmptyDir volumes | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 | |
pod should support shared volumes between containers [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
------------------------------ | |
{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":278,"completed":4,"skipped":27,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]"]} | |
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS | |
------------------------------ | |
[k8s.io] Pods | |
should support retrieving logs from the container over websockets [NodeConformance] [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
[BeforeEach] [k8s.io] Pods | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 | |
STEP: Creating a kubernetes client | |
Mar 6 02:40:29.097: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759 | |
STEP: Building a namespace api object, basename pods | |
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pods-1103 | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[BeforeEach] [k8s.io] Pods | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 | |
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
Mar 6 02:40:29.233: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759 | |
STEP: creating the pod | |
STEP: submitting the pod to kubernetes | |
[AfterEach] [k8s.io] Pods | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 | |
Mar 6 02:40:31.261: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
STEP: Destroying namespace "pods-1103" for this suite. | |
•{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":278,"completed":5,"skipped":115,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]"]} | |
SSSSSSSSSSSSSSSSSSSSSSSSS | |
------------------------------ | |
[sig-storage] Projected downwardAPI | |
should provide podname only [NodeConformance] [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
[BeforeEach] [sig-storage] Projected downwardAPI | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 | |
STEP: Creating a kubernetes client | |
Mar 6 02:40:31.268: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759 | |
STEP: Building a namespace api object, basename projected | |
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-1491 | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[BeforeEach] [sig-storage] Projected downwardAPI | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 | |
[It] should provide podname only [NodeConformance] [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
STEP: Creating a pod to test downward API volume plugin | |
Mar 6 02:40:31.405: INFO: Waiting up to 5m0s for pod "downwardapi-volume-30b532f3-5ef9-493b-b591-e88207b8e556" in namespace "projected-1491" to be "success or failure" | |
Mar 6 02:40:31.408: INFO: Pod "downwardapi-volume-30b532f3-5ef9-493b-b591-e88207b8e556": Phase="Pending", Reason="", readiness=false. Elapsed: 2.882219ms | |
Mar 6 02:40:33.410: INFO: Pod "downwardapi-volume-30b532f3-5ef9-493b-b591-e88207b8e556": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005129711s | |
Mar 6 02:40:35.413: INFO: Pod "downwardapi-volume-30b532f3-5ef9-493b-b591-e88207b8e556": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.00833495s | |
STEP: Saw pod success | |
Mar 6 02:40:35.413: INFO: Pod "downwardapi-volume-30b532f3-5ef9-493b-b591-e88207b8e556" satisfied condition "success or failure" | |
Mar 6 02:40:35.417: INFO: Trying to get logs from node worker02 pod downwardapi-volume-30b532f3-5ef9-493b-b591-e88207b8e556 container client-container: <nil> | |
STEP: delete the pod | |
Mar 6 02:40:35.431: INFO: Waiting for pod downwardapi-volume-30b532f3-5ef9-493b-b591-e88207b8e556 to disappear | |
Mar 6 02:40:35.433: INFO: Pod downwardapi-volume-30b532f3-5ef9-493b-b591-e88207b8e556 no longer exists | |
[AfterEach] [sig-storage] Projected downwardAPI | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 | |
Mar 6 02:40:35.433: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
STEP: Destroying namespace "projected-1491" for this suite. | |
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":278,"completed":6,"skipped":140,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]"]} | |
SSSSSSSSSSS | |
------------------------------ | |
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases | |
should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
[BeforeEach] [k8s.io] Kubelet | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 | |
STEP: Creating a kubernetes client | |
Mar 6 02:40:35.440: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759 | |
STEP: Building a namespace api object, basename kubelet-test | |
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubelet-test-8281 | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[BeforeEach] [k8s.io] Kubelet | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 | |
[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
[AfterEach] [k8s.io] Kubelet | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 | |
Mar 6 02:40:37.584: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
STEP: Destroying namespace "kubelet-test-8281" for this suite. | |
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":7,"skipped":151,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]"]} | |
SSS | |
------------------------------ | |
[sig-apps] Job | |
should delete a job [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
[BeforeEach] [sig-apps] Job | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 | |
STEP: Creating a kubernetes client | |
Mar 6 02:40:37.591: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759 | |
STEP: Building a namespace api object, basename job | |
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in job-521 | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[It] should delete a job [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
STEP: Creating a job | |
STEP: Ensuring active pods == parallelism | |
STEP: delete a job | |
STEP: deleting Job.batch foo in namespace job-521, will wait for the garbage collector to delete the pods | |
Mar 6 02:40:39.795: INFO: Deleting Job.batch foo took: 4.924021ms | |
Mar 6 02:40:39.895: INFO: Terminating Job.batch foo pods took: 100.104439ms | |
STEP: Ensuring job was deleted | |
[AfterEach] [sig-apps] Job | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 | |
Mar 6 02:41:13.797: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
STEP: Destroying namespace "job-521" for this suite. | |
• [SLOW TEST:36.216 seconds] | |
[sig-apps] Job | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 | |
should delete a job [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
------------------------------ | |
{"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":278,"completed":8,"skipped":154,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]"]} | |
SSSSSSSS | |
------------------------------ | |
[k8s.io] Container Runtime blackbox test when starting a container that exits | |
should run with the expected status [NodeConformance] [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
[BeforeEach] [k8s.io] Container Runtime | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 | |
STEP: Creating a kubernetes client | |
Mar 6 02:41:13.808: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759 | |
STEP: Building a namespace api object, basename container-runtime | |
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-runtime-3485 | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[It] should run with the expected status [NodeConformance] [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' | |
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' | |
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition | |
STEP: Container 'terminate-cmd-rpa': should get the expected 'State' | |
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] | |
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' | |
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' | |
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition | |
STEP: Container 'terminate-cmd-rpof': should get the expected 'State' | |
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] | |
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' | |
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' | |
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition | |
STEP: Container 'terminate-cmd-rpn': should get the expected 'State' | |
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] | |
[AfterEach] [k8s.io] Container Runtime | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 | |
Mar 6 02:41:35.094: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
STEP: Destroying namespace "container-runtime-3485" for this suite. | |
• [SLOW TEST:21.299 seconds] | |
[k8s.io] Container Runtime | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 | |
blackbox test | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 | |
when starting a container that exits | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39 | |
should run with the expected status [NodeConformance] [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
------------------------------ | |
{"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":278,"completed":9,"skipped":162,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]"]} | |
SSS | |
------------------------------ | |
[sig-network] Proxy version v1 | |
should proxy logs on node using proxy subresource [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
[BeforeEach] version v1 | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 | |
STEP: Creating a kubernetes client | |
Mar 6 02:41:35.106: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759 | |
STEP: Building a namespace api object, basename proxy | |
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in proxy-5380 | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[It] should proxy logs on node using proxy subresource [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
Mar 6 02:41:35.248: INFO: (0) /api/v1/nodes/worker01/proxy/logs/: <pre> | |
<a href="anaconda/">anaconda/</a> | |
<a href="audit/">audit/</a> | |
<a href="boot.log">boot.log</... (200; 3.918437ms) | |
Mar 6 02:41:35.250: INFO: (1) /api/v1/nodes/worker01/proxy/logs/: <pre> | |
<a href="anaconda/">anaconda/</a> | |
<a href="audit/">audit/</a> | |
<a href="boot.log">boot.log</... (200; 2.167213ms) | |
Mar 6 02:41:35.253: INFO: (2) /api/v1/nodes/worker01/proxy/logs/: <pre> | |
<a href="anaconda/">anaconda/</a> | |
<a href="audit/">audit/</a> | |
<a href="boot.log">boot.log</... (200; 2.472236ms) | |
Mar 6 02:41:35.255: INFO: (3) /api/v1/nodes/worker01/proxy/logs/: <pre> | |
<a href="anaconda/">anaconda/</a> | |
<a href="audit/">audit/</a> | |
<a href="boot.log">boot.log</... (200; 2.290558ms) | |
Mar 6 02:41:35.257: INFO: (4) /api/v1/nodes/worker01/proxy/logs/: <pre> | |
<a href="anaconda/">anaconda/</a> | |
<a href="audit/">audit/</a> | |
<a href="boot.log">boot.log</... (200; 2.029142ms) | |
Mar 6 02:41:35.260: INFO: (5) /api/v1/nodes/worker01/proxy/logs/: <pre> | |
<a href="anaconda/">anaconda/</a> | |
<a href="audit/">audit/</a> | |
<a href="boot.log">boot.log</... (200; 2.496296ms) | |
Mar 6 02:41:35.262: INFO: (6) /api/v1/nodes/worker01/proxy/logs/: <pre> | |
<a href="anaconda/">anaconda/</a> | |
<a href="audit/">audit/</a> | |
<a href="boot.log">boot.log</... (200; 2.250325ms) | |
Mar 6 02:41:35.265: INFO: (7) /api/v1/nodes/worker01/proxy/logs/: <pre> | |
<a href="anaconda/">anaconda/</a> | |
<a href="audit/">audit/</a> | |
<a href="boot.log">boot.log</... (200; 2.521808ms) | |
Mar 6 02:41:35.267: INFO: (8) /api/v1/nodes/worker01/proxy/logs/: <pre> | |
<a href="anaconda/">anaconda/</a> | |
<a href="audit/">audit/</a> | |
<a href="boot.log">boot.log</... (200; 2.446168ms) | |
Mar 6 02:41:35.269: INFO: (9) /api/v1/nodes/worker01/proxy/logs/: <pre> | |
<a href="anaconda/">anaconda/</a> | |
<a href="audit/">audit/</a> | |
<a href="boot.log">boot.log</... (200; 2.403128ms) | |
Mar 6 02:41:35.272: INFO: (10) /api/v1/nodes/worker01/proxy/logs/: <pre> | |
<a href="anaconda/">anaconda/</a> | |
<a href="audit/">audit/</a> | |
<a href="boot.log">boot.log</... (200; 2.064471ms) | |
Mar 6 02:41:35.274: INFO: (11) /api/v1/nodes/worker01/proxy/logs/: <pre> | |
<a href="anaconda/">anaconda/</a> | |
<a href="audit/">audit/</a> | |
<a href="boot.log">boot.log</... (200; 2.294958ms) | |
Mar 6 02:41:35.276: INFO: (12) /api/v1/nodes/worker01/proxy/logs/: <pre> | |
<a href="anaconda/">anaconda/</a> | |
<a href="audit/">audit/</a> | |
<a href="boot.log">boot.log</... (200; 2.407591ms) | |
Mar 6 02:41:35.278: INFO: (13) /api/v1/nodes/worker01/proxy/logs/: <pre> | |
<a href="anaconda/">anaconda/</a> | |
<a href="audit/">audit/</a> | |
<a href="boot.log">boot.log</... (200; 2.161999ms) | |
Mar 6 02:41:35.281: INFO: (14) /api/v1/nodes/worker01/proxy/logs/: <pre> | |
<a href="anaconda/">anaconda/</a> | |
<a href="audit/">audit/</a> | |
<a href="boot.log">boot.log</... (200; 2.338496ms) | |
Mar 6 02:41:35.283: INFO: (15) /api/v1/nodes/worker01/proxy/logs/: <pre> | |
<a href="anaconda/">anaconda/</a> | |
<a href="audit/">audit/</a> | |
<a href="boot.log">boot.log</... (200; 2.201429ms) | |
Mar 6 02:41:35.285: INFO: (16) /api/v1/nodes/worker01/proxy/logs/: <pre> | |
<a href="anaconda/">anaconda/</a> | |
<a href="audit/">audit/</a> | |
<a href="boot.log">boot.log</... (200; 2.018162ms) | |
Mar 6 02:41:35.287: INFO: (17) /api/v1/nodes/worker01/proxy/logs/: <pre> | |
<a href="anaconda/">anaconda/</a> | |
<a href="audit/">audit/</a> | |
<a href="boot.log">boot.log</... (200; 2.39168ms) | |
Mar 6 02:41:35.290: INFO: (18) /api/v1/nodes/worker01/proxy/logs/: <pre> | |
<a href="anaconda/">anaconda/</a> | |
<a href="audit/">audit/</a> | |
<a href="boot.log">boot.log</... (200; 2.308516ms) | |
Mar 6 02:41:35.292: INFO: (19) /api/v1/nodes/worker01/proxy/logs/: <pre> | |
<a href="anaconda/">anaconda/</a> | |
<a href="audit/">audit/</a> | |
<a href="boot.log">boot.log</... (200; 1.945233ms) | |
[AfterEach] version v1 | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 | |
Mar 6 02:41:35.292: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
STEP: Destroying namespace "proxy-5380" for this suite. | |
•{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance]","total":278,"completed":10,"skipped":165,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]"]} | |
SSSSSSSSSSSS | |
------------------------------ | |
[sig-storage] Projected downwardAPI | |
should provide container's cpu limit [NodeConformance] [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
[BeforeEach] [sig-storage] Projected downwardAPI | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 | |
STEP: Creating a kubernetes client | |
Mar 6 02:41:35.298: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759 | |
STEP: Building a namespace api object, basename projected | |
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-3421 | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[BeforeEach] [sig-storage] Projected downwardAPI | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 | |
[It] should provide container's cpu limit [NodeConformance] [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
STEP: Creating a pod to test downward API volume plugin | |
Mar 6 02:41:35.437: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4a53e79d-5aa3-4060-bde4-99351516b034" in namespace "projected-3421" to be "success or failure" | |
Mar 6 02:41:35.440: INFO: Pod "downwardapi-volume-4a53e79d-5aa3-4060-bde4-99351516b034": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017076ms | |
Mar 6 02:41:37.449: INFO: Pod "downwardapi-volume-4a53e79d-5aa3-4060-bde4-99351516b034": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.011726889s | |
STEP: Saw pod success | |
Mar 6 02:41:37.449: INFO: Pod "downwardapi-volume-4a53e79d-5aa3-4060-bde4-99351516b034" satisfied condition "success or failure" | |
Mar 6 02:41:37.451: INFO: Trying to get logs from node worker02 pod downwardapi-volume-4a53e79d-5aa3-4060-bde4-99351516b034 container client-container: <nil> | |
STEP: delete the pod | |
Mar 6 02:41:37.466: INFO: Waiting for pod downwardapi-volume-4a53e79d-5aa3-4060-bde4-99351516b034 to disappear | |
Mar 6 02:41:37.468: INFO: Pod downwardapi-volume-4a53e79d-5aa3-4060-bde4-99351516b034 no longer exists | |
[AfterEach] [sig-storage] Projected downwardAPI | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 | |
Mar 6 02:41:37.468: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
STEP: Destroying namespace "projected-3421" for this suite. | |
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":278,"completed":11,"skipped":177,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]"]} | |
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS | |
------------------------------ | |
[sig-storage] Secrets | |
should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
[BeforeEach] [sig-storage] Secrets | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 | |
STEP: Creating a kubernetes client | |
Mar 6 02:41:37.476: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759 | |
STEP: Building a namespace api object, basename secrets | |
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in secrets-5796 | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
STEP: Creating secret with name secret-test-map-ba9c3339-ddd4-4172-bced-4deb82a408be | |
STEP: Creating a pod to test consume secrets | |
Mar 6 02:41:37.625: INFO: Waiting up to 5m0s for pod "pod-secrets-9a5b3e81-145f-442b-b423-e071bbe1d8ac" in namespace "secrets-5796" to be "success or failure" | |
Mar 6 02:41:37.628: INFO: Pod "pod-secrets-9a5b3e81-145f-442b-b423-e071bbe1d8ac": Phase="Pending", Reason="", readiness=false. Elapsed: 2.958754ms | |
Mar 6 02:41:39.630: INFO: Pod "pod-secrets-9a5b3e81-145f-442b-b423-e071bbe1d8ac": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.005197416s | |
STEP: Saw pod success | |
Mar 6 02:41:39.630: INFO: Pod "pod-secrets-9a5b3e81-145f-442b-b423-e071bbe1d8ac" satisfied condition "success or failure" | |
Mar 6 02:41:39.633: INFO: Trying to get logs from node worker02 pod pod-secrets-9a5b3e81-145f-442b-b423-e071bbe1d8ac container secret-volume-test: <nil> | |
STEP: delete the pod | |
Mar 6 02:41:39.651: INFO: Waiting for pod pod-secrets-9a5b3e81-145f-442b-b423-e071bbe1d8ac to disappear | |
Mar 6 02:41:39.653: INFO: Pod pod-secrets-9a5b3e81-145f-442b-b423-e071bbe1d8ac no longer exists | |
[AfterEach] [sig-storage] Secrets | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 | |
Mar 6 02:41:39.653: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
STEP: Destroying namespace "secrets-5796" for this suite. | |
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":12,"skipped":208,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]"]} | |
------------------------------ | |
[sig-storage] Subpath Atomic writer volumes | |
should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
[BeforeEach] [sig-storage] Subpath | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 | |
STEP: Creating a kubernetes client | |
Mar 6 02:41:39.660: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759 | |
STEP: Building a namespace api object, basename subpath | |
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in subpath-4019 | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[BeforeEach] Atomic writer volumes | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 | |
STEP: Setting up data | |
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
STEP: Creating pod pod-subpath-test-configmap-sjwb | |
STEP: Creating a pod to test atomic-volume-subpath | |
Mar 6 02:41:39.804: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-sjwb" in namespace "subpath-4019" to be "success or failure" | |
Mar 6 02:41:39.806: INFO: Pod "pod-subpath-test-configmap-sjwb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.087472ms | |
Mar 6 02:41:41.810: INFO: Pod "pod-subpath-test-configmap-sjwb": Phase="Running", Reason="", readiness=true. Elapsed: 2.006015461s | |
Mar 6 02:41:43.812: INFO: Pod "pod-subpath-test-configmap-sjwb": Phase="Running", Reason="", readiness=true. Elapsed: 4.00825221s | |
Mar 6 02:41:45.818: INFO: Pod "pod-subpath-test-configmap-sjwb": Phase="Running", Reason="", readiness=true. Elapsed: 6.014478492s | |
Mar 6 02:41:47.821: INFO: Pod "pod-subpath-test-configmap-sjwb": Phase="Running", Reason="", readiness=true. Elapsed: 8.017060425s | |
Mar 6 02:41:49.823: INFO: Pod "pod-subpath-test-configmap-sjwb": Phase="Running", Reason="", readiness=true. Elapsed: 10.01937165s | |
Mar 6 02:41:51.826: INFO: Pod "pod-subpath-test-configmap-sjwb": Phase="Running", Reason="", readiness=true. Elapsed: 12.022078842s | |
Mar 6 02:41:53.828: INFO: Pod "pod-subpath-test-configmap-sjwb": Phase="Running", Reason="", readiness=true. Elapsed: 14.024501637s | |
Mar 6 02:41:55.831: INFO: Pod "pod-subpath-test-configmap-sjwb": Phase="Running", Reason="", readiness=true. Elapsed: 16.027064662s | |
Mar 6 02:41:57.837: INFO: Pod "pod-subpath-test-configmap-sjwb": Phase="Running", Reason="", readiness=true. Elapsed: 18.032955461s | |
Mar 6 02:41:59.839: INFO: Pod "pod-subpath-test-configmap-sjwb": Phase="Running", Reason="", readiness=true. Elapsed: 20.035551681s | |
Mar 6 02:42:01.843: INFO: Pod "pod-subpath-test-configmap-sjwb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.038928148s | |
STEP: Saw pod success | |
Mar 6 02:42:01.843: INFO: Pod "pod-subpath-test-configmap-sjwb" satisfied condition "success or failure" | |
Mar 6 02:42:01.847: INFO: Trying to get logs from node worker02 pod pod-subpath-test-configmap-sjwb container test-container-subpath-configmap-sjwb: <nil> | |
STEP: delete the pod | |
Mar 6 02:42:01.859: INFO: Waiting for pod pod-subpath-test-configmap-sjwb to disappear | |
Mar 6 02:42:01.862: INFO: Pod pod-subpath-test-configmap-sjwb no longer exists | |
STEP: Deleting pod pod-subpath-test-configmap-sjwb | |
Mar 6 02:42:01.862: INFO: Deleting pod "pod-subpath-test-configmap-sjwb" in namespace "subpath-4019" | |
[AfterEach] [sig-storage] Subpath | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 | |
Mar 6 02:42:01.865: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
STEP: Destroying namespace "subpath-4019" for this suite. | |
• [SLOW TEST:22.211 seconds] | |
[sig-storage] Subpath | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 | |
Atomic writer volumes | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 | |
should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
------------------------------ | |
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":278,"completed":13,"skipped":208,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]"]} | |
SSSSSSSS | |
------------------------------ | |
[k8s.io] InitContainer [NodeConformance] | |
should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
[BeforeEach] [k8s.io] InitContainer [NodeConformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 | |
STEP: Creating a kubernetes client | |
Mar 6 02:42:01.871: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759 | |
STEP: Building a namespace api object, basename init-container | |
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in init-container-2710 | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[BeforeEach] [k8s.io] InitContainer [NodeConformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 | |
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
STEP: creating the pod | |
Mar 6 02:42:01.998: INFO: PodSpec: initContainers in spec.initContainers | |
[AfterEach] [k8s.io] InitContainer [NodeConformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 | |
Mar 6 02:42:04.637: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
STEP: Destroying namespace "init-container-2710" for this suite. | |
•{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":278,"completed":14,"skipped":216,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]"]} | |
S | |
------------------------------ | |
[k8s.io] Probing container | |
with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
[BeforeEach] [k8s.io] Probing container | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 | |
STEP: Creating a kubernetes client | |
Mar 6 02:42:04.645: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759 | |
STEP: Building a namespace api object, basename container-probe | |
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-probe-5732 | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[BeforeEach] [k8s.io] Probing container | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 | |
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
[AfterEach] [k8s.io] Probing container | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 | |
Mar 6 02:43:04.788: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
STEP: Destroying namespace "container-probe-5732" for this suite. | |
• [SLOW TEST:60.155 seconds] | |
[k8s.io] Probing container | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 | |
with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
------------------------------ | |
{"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":278,"completed":15,"skipped":217,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]"]} | |
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS | |
------------------------------ | |
[sig-storage] Projected secret | |
should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
[BeforeEach] [sig-storage] Projected secret | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 | |
STEP: Creating a kubernetes client | |
Mar 6 02:43:04.800: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759 | |
STEP: Building a namespace api object, basename projected | |
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-3806 | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
STEP: Creating projection with secret that has name projected-secret-test-b5491ee0-8a7d-412f-83d5-89c566538d7e | |
STEP: Creating a pod to test consume secrets | |
Mar 6 02:43:04.949: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-4fa41554-cb10-4ffd-8968-1f369bbdbcaf" in namespace "projected-3806" to be "success or failure" | |
Mar 6 02:43:04.952: INFO: Pod "pod-projected-secrets-4fa41554-cb10-4ffd-8968-1f369bbdbcaf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.166774ms | |
Mar 6 02:43:06.954: INFO: Pod "pod-projected-secrets-4fa41554-cb10-4ffd-8968-1f369bbdbcaf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.004795253s | |
STEP: Saw pod success | |
Mar 6 02:43:06.954: INFO: Pod "pod-projected-secrets-4fa41554-cb10-4ffd-8968-1f369bbdbcaf" satisfied condition "success or failure" | |
Mar 6 02:43:06.957: INFO: Trying to get logs from node worker02 pod pod-projected-secrets-4fa41554-cb10-4ffd-8968-1f369bbdbcaf container projected-secret-volume-test: <nil> | |
STEP: delete the pod | |
Mar 6 02:43:06.971: INFO: Waiting for pod pod-projected-secrets-4fa41554-cb10-4ffd-8968-1f369bbdbcaf to disappear | |
Mar 6 02:43:06.973: INFO: Pod pod-projected-secrets-4fa41554-cb10-4ffd-8968-1f369bbdbcaf no longer exists | |
[AfterEach] [sig-storage] Projected secret | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 | |
Mar 6 02:43:06.973: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
STEP: Destroying namespace "projected-3806" for this suite. | |
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":16,"skipped":251,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]"]} | |
SSSSSSSSSSSSSSSSSSSS | |
------------------------------ | |
[sig-api-machinery] Watchers | |
should observe an object deletion if it stops meeting the requirements of the selector [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
[BeforeEach] [sig-api-machinery] Watchers | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 | |
STEP: Creating a kubernetes client | |
Mar 6 02:43:06.982: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759 | |
STEP: Building a namespace api object, basename watch | |
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in watch-7589 | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
STEP: creating a watch on configmaps with a certain label | |
STEP: creating a new configmap | |
STEP: modifying the configmap once | |
STEP: changing the label value of the configmap | |
STEP: Expecting to observe a delete notification for the watched object | |
Mar 6 02:43:07.137: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-7589 /api/v1/namespaces/watch-7589/configmaps/e2e-watch-test-label-changed 1c0bab8e-4585-405d-b0d1-f04d7fe5e21b 4255 0 2020-03-06 02:43:07 +0000 UTC <nil> <nil> map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} | |
Mar 6 02:43:07.137: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-7589 /api/v1/namespaces/watch-7589/configmaps/e2e-watch-test-label-changed 1c0bab8e-4585-405d-b0d1-f04d7fe5e21b 4256 0 2020-03-06 02:43:07 +0000 UTC <nil> <nil> map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} | |
Mar 6 02:43:07.137: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-7589 /api/v1/namespaces/watch-7589/configmaps/e2e-watch-test-label-changed 1c0bab8e-4585-405d-b0d1-f04d7fe5e21b 4257 0 2020-03-06 02:43:07 +0000 UTC <nil> <nil> map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} | |
STEP: modifying the configmap a second time | |
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements | |
STEP: changing the label value of the configmap back | |
STEP: modifying the configmap a third time | |
STEP: deleting the configmap | |
STEP: Expecting to observe an add notification for the watched object when the label value was restored | |
Mar 6 02:43:17.157: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-7589 /api/v1/namespaces/watch-7589/configmaps/e2e-watch-test-label-changed 1c0bab8e-4585-405d-b0d1-f04d7fe5e21b 4313 0 2020-03-06 02:43:07 +0000 UTC <nil> <nil> map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} | |
Mar 6 02:43:17.157: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-7589 /api/v1/namespaces/watch-7589/configmaps/e2e-watch-test-label-changed 1c0bab8e-4585-405d-b0d1-f04d7fe5e21b 4314 0 2020-03-06 02:43:07 +0000 UTC <nil> <nil> map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} | |
Mar 6 02:43:17.157: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-7589 /api/v1/namespaces/watch-7589/configmaps/e2e-watch-test-label-changed 1c0bab8e-4585-405d-b0d1-f04d7fe5e21b 4315 0 2020-03-06 02:43:07 +0000 UTC <nil> <nil> map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} | |
[AfterEach] [sig-api-machinery] Watchers | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 | |
Mar 6 02:43:17.157: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
STEP: Destroying namespace "watch-7589" for this suite. | |
• [SLOW TEST:10.181 seconds] | |
[sig-api-machinery] Watchers | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 | |
should observe an object deletion if it stops meeting the requirements of the selector [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
------------------------------ | |
{"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":278,"completed":17,"skipped":271,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]"]} | |
SSSSSSSSSSS | |
------------------------------ | |
[sig-network] Proxy version v1 | |
should proxy through a service and a pod [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
[BeforeEach] version v1 | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 | |
STEP: Creating a kubernetes client | |
Mar 6 02:43:17.163: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759 | |
STEP: Building a namespace api object, basename proxy | |
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in proxy-7131 | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[It] should proxy through a service and a pod [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
STEP: starting an echo server on multiple ports | |
STEP: creating replication controller proxy-service-jw2qq in namespace proxy-7131 | |
I0306 02:43:17.317911 19 runners.go:189] Created replication controller with name: proxy-service-jw2qq, namespace: proxy-7131, replica count: 1 | |
I0306 02:43:18.368189 19 runners.go:189] proxy-service-jw2qq Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady | |
I0306 02:43:19.368340 19 runners.go:189] proxy-service-jw2qq Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady | |
I0306 02:43:20.368469 19 runners.go:189] proxy-service-jw2qq Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady | |
I0306 02:43:21.368603 19 runners.go:189] proxy-service-jw2qq Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady | |
I0306 02:43:22.368727 19 runners.go:189] proxy-service-jw2qq Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady | |
I0306 02:43:23.368849 19 runners.go:189] proxy-service-jw2qq Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady | |
I0306 02:43:24.368993 19 runners.go:189] proxy-service-jw2qq Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady | |
I0306 02:43:25.369135 19 runners.go:189] proxy-service-jw2qq Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady | |
I0306 02:43:26.369259 19 runners.go:189] proxy-service-jw2qq Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady | |
I0306 02:43:27.369402 19 runners.go:189] proxy-service-jw2qq Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady | |
Mar 6 02:43:27.372: INFO: setup took 10.079385495s, starting test cases | |
STEP: running 16 cases, 20 attempts per case, 320 total attempts | |
Mar 6 02:43:27.383: INFO: (0) /api/v1/namespaces/proxy-7131/services/proxy-service-jw2qq:portname1/proxy/: foo (200; 10.75768ms) | |
Mar 6 02:43:27.383: INFO: (0) /api/v1/namespaces/proxy-7131/pods/http:proxy-service-jw2qq-p6vtp:160/proxy/: foo (200; 10.638706ms) | |
Mar 6 02:43:27.384: INFO: (0) /api/v1/namespaces/proxy-7131/pods/proxy-service-jw2qq-p6vtp:162/proxy/: bar (200; 11.383987ms) | |
Mar 6 02:43:27.384: INFO: (0) /api/v1/namespaces/proxy-7131/pods/https:proxy-service-jw2qq-p6vtp:460/proxy/: tls baz (200; 11.956412ms) | |
Mar 6 02:43:27.384: INFO: (0) /api/v1/namespaces/proxy-7131/pods/http:proxy-service-jw2qq-p6vtp:162/proxy/: bar (200; 12.29239ms) | |
Mar 6 02:43:27.384: INFO: (0) /api/v1/namespaces/proxy-7131/pods/https:proxy-service-jw2qq-p6vtp:443/proxy/: <a href="/api/v1/namespaces/proxy-7131/pods/https:proxy-service-jw2qq-p6vtp:443/proxy/tlsrewritem... (200; 12.415047ms) | |
Mar 6 02:43:27.384: INFO: (0) /api/v1/namespaces/proxy-7131/services/proxy-service-jw2qq:portname2/proxy/: bar (200; 12.243791ms) | |
Mar 6 02:43:27.384: INFO: (0) /api/v1/namespaces/proxy-7131/pods/https:proxy-service-jw2qq-p6vtp:462/proxy/: tls qux (200; 12.800259ms) | |
Mar 6 02:43:27.384: INFO: (0) /api/v1/namespaces/proxy-7131/services/https:proxy-service-jw2qq:tlsportname2/proxy/: tls qux (200; 12.707195ms) | |
Mar 6 02:43:27.401: INFO: (0) /api/v1/namespaces/proxy-7131/services/http:proxy-service-jw2qq:portname1/proxy/: foo (200; 28.624883ms) | |
Mar 6 02:43:27.402: INFO: (0) /api/v1/namespaces/proxy-7131/pods/proxy-service-jw2qq-p6vtp/proxy/: <a href="/api/v1/namespaces/proxy-7131/pods/proxy-service-jw2qq-p6vtp/proxy/rewriteme">test</a> (200; 30.010982ms) | |
Mar 6 02:43:27.402: INFO: (0) /api/v1/namespaces/proxy-7131/pods/proxy-service-jw2qq-p6vtp:160/proxy/: foo (200; 29.562596ms) | |
Mar 6 02:43:27.402: INFO: (0) /api/v1/namespaces/proxy-7131/services/http:proxy-service-jw2qq:portname2/proxy/: bar (200; 29.677109ms) | |
Mar 6 02:43:27.402: INFO: (0) /api/v1/namespaces/proxy-7131/pods/proxy-service-jw2qq-p6vtp:1080/proxy/: <a href="/api/v1/namespaces/proxy-7131/pods/proxy-service-jw2qq-p6vtp:1080/proxy/rewriteme">test<... (200; 30.114429ms) | |
Mar 6 02:43:27.402: INFO: (0) /api/v1/namespaces/proxy-7131/pods/http:proxy-service-jw2qq-p6vtp:1080/proxy/: <a href="/api/v1/namespaces/proxy-7131/pods/http:proxy-service-jw2qq-p6vtp:1080/proxy/rewriteme">... (200; 29.799644ms) | |
Mar 6 02:43:27.404: INFO: (0) /api/v1/namespaces/proxy-7131/services/https:proxy-service-jw2qq:tlsportname1/proxy/: tls baz (200; 31.632081ms) | |
Mar 6 02:43:27.408: INFO: (1) /api/v1/namespaces/proxy-7131/pods/http:proxy-service-jw2qq-p6vtp:162/proxy/: bar (200; 3.661707ms) | |
Mar 6 02:43:27.410: INFO: (1) /api/v1/namespaces/proxy-7131/pods/proxy-service-jw2qq-p6vtp:160/proxy/: foo (200; 5.448258ms) | |
Mar 6 02:43:27.410: INFO: (1) /api/v1/namespaces/proxy-7131/pods/proxy-service-jw2qq-p6vtp/proxy/: <a href="/api/v1/namespaces/proxy-7131/pods/proxy-service-jw2qq-p6vtp/proxy/rewriteme">test</a> (200; 6.079197ms) | |
Mar 6 02:43:27.410: INFO: (1) /api/v1/namespaces/proxy-7131/services/https:proxy-service-jw2qq:tlsportname2/proxy/: tls qux (200; 6.153358ms) | |
Mar 6 02:43:27.410: INFO: (1) /api/v1/namespaces/proxy-7131/pods/https:proxy-service-jw2qq-p6vtp:460/proxy/: tls baz (200; 5.414763ms) | |
Mar 6 02:43:27.410: INFO: (1) /api/v1/namespaces/proxy-7131/pods/proxy-service-jw2qq-p6vtp:1080/proxy/: <a href="/api/v1/namespaces/proxy-7131/pods/proxy-service-jw2qq-p6vtp:1080/proxy/rewriteme">test<... (200; 5.943711ms) | |
Mar 6 02:43:27.410: INFO: (1) /api/v1/namespaces/proxy-7131/pods/http:proxy-service-jw2qq-p6vtp:160/proxy/: foo (200; 6.112722ms) | |
Mar 6 02:43:27.411: INFO: (1) /api/v1/namespaces/proxy-7131/services/proxy-service-jw2qq:portname1/proxy/: foo (200; 6.46139ms) | |
Mar 6 02:43:27.411: INFO: (1) /api/v1/namespaces/proxy-7131/services/http:proxy-service-jw2qq:portname2/proxy/: bar (200; 6.228202ms) | |
Mar 6 02:43:27.414: INFO: (1) /api/v1/namespaces/proxy-7131/services/http:proxy-service-jw2qq:portname1/proxy/: foo (200; 9.514525ms) | |
Mar 6 02:43:27.421: INFO: (1) /api/v1/namespaces/proxy-7131/pods/https:proxy-service-jw2qq-p6vtp:443/proxy/: <a href="/api/v1/namespaces/proxy-7131/pods/https:proxy-service-jw2qq-p6vtp:443/proxy/tlsrewritem... (200; 17.140581ms) | |
Mar 6 02:43:27.421: INFO: (1) /api/v1/namespaces/proxy-7131/pods/proxy-service-jw2qq-p6vtp:162/proxy/: bar (200; 16.730584ms) | |
Mar 6 02:43:27.422: INFO: (1) /api/v1/namespaces/proxy-7131/pods/http:proxy-service-jw2qq-p6vtp:1080/proxy/: <a href="/api/v1/namespaces/proxy-7131/pods/http:proxy-service-jw2qq-p6vtp:1080/proxy/rewriteme">... (200; 16.919811ms) | |
Mar 6 02:43:27.422: INFO: (1) /api/v1/namespaces/proxy-7131/pods/https:proxy-service-jw2qq-p6vtp:462/proxy/: tls qux (200; 17.695105ms) | |
Mar 6 02:43:27.422: INFO: (1) /api/v1/namespaces/proxy-7131/services/proxy-service-jw2qq:portname2/proxy/: bar (200; 17.082024ms) | |
Mar 6 02:43:27.423: INFO: (1) /api/v1/namespaces/proxy-7131/services/https:proxy-service-jw2qq:tlsportname1/proxy/: tls baz (200; 17.987299ms) | |
Mar 6 02:43:27.425: INFO: (2) /api/v1/namespaces/proxy-7131/pods/https:proxy-service-jw2qq-p6vtp:443/proxy/: <a href="/api/v1/namespaces/proxy-7131/pods/https:proxy-service-jw2qq-p6vtp:443/proxy/tlsrewritem... (200; 2.804425ms) | |
Mar 6 02:43:27.426: INFO: (2) /api/v1/namespaces/proxy-7131/pods/http:proxy-service-jw2qq-p6vtp:160/proxy/: foo (200; 2.878715ms) | |
Mar 6 02:43:27.428: INFO: (2) /api/v1/namespaces/proxy-7131/pods/proxy-service-jw2qq-p6vtp:162/proxy/: bar (200; 4.692323ms) | |
Mar 6 02:43:27.428: INFO: (2) /api/v1/namespaces/proxy-7131/pods/https:proxy-service-jw2qq-p6vtp:460/proxy/: tls baz (200; 4.510096ms) | |
Mar 6 02:43:27.430: INFO: (2) /api/v1/namespaces/proxy-7131/pods/proxy-service-jw2qq-p6vtp/proxy/: <a href="/api/v1/namespaces/proxy-7131/pods/proxy-service-jw2qq-p6vtp/proxy/rewriteme">test</a> (200; 6.894675ms) | |
Mar 6 02:43:27.430: INFO: (2) /api/v1/namespaces/proxy-7131/pods/proxy-service-jw2qq-p6vtp:160/proxy/: foo (200; 7.447179ms) | |
Mar 6 02:43:27.430: INFO: (2) /api/v1/namespaces/proxy-7131/services/http:proxy-service-jw2qq:portname1/proxy/: foo (200; 7.319094ms) | |
Mar 6 02:43:27.430: INFO: (2) /api/v1/namespaces/proxy-7131/services/https:proxy-service-jw2qq:tlsportname1/proxy/: tls baz (200; 7.19389ms) | |
Mar 6 02:43:27.430: INFO: (2) /api/v1/namespaces/proxy-7131/pods/proxy-service-jw2qq-p6vtp:1080/proxy/: <a href="/api/v1/namespaces/proxy-7131/pods/proxy-service-jw2qq-p6vtp:1080/proxy/rewriteme">test<... (200; 6.921901ms) | |
Mar 6 02:43:27.430: INFO: (2) /api/v1/namespaces/proxy-7131/pods/http:proxy-service-jw2qq-p6vtp:162/proxy/: bar (200; 7.572049ms) | |
Mar 6 02:43:27.431: INFO: (2) /api/v1/namespaces/proxy-7131/services/proxy-service-jw2qq:portname2/proxy/: bar (200; 7.50989ms) | |
Mar 6 02:43:27.431: INFO: (2) /api/v1/namespaces/proxy-7131/services/https:proxy-service-jw2qq:tlsportname2/proxy/: tls qux (200; 7.264885ms) | |
Mar 6 02:43:27.431: INFO: (2) /api/v1/namespaces/proxy-7131/pods/http:proxy-service-jw2qq-p6vtp:1080/proxy/: <a href="/api/v1/namespaces/proxy-7131/pods/http:proxy-service-jw2qq-p6vtp:1080/proxy/rewriteme">... (200; 7.801208ms) | |
Mar 6 02:43:27.431: INFO: (2) /api/v1/namespaces/proxy-7131/pods/https:proxy-service-jw2qq-p6vtp:462/proxy/: tls qux (200; 7.342208ms) | |
Mar 6 02:43:27.432: INFO: (2) /api/v1/namespaces/proxy-7131/services/proxy-service-jw2qq:portname1/proxy/: foo (200; 8.237935ms) | |
Mar 6 02:43:27.433: INFO: (2) /api/v1/namespaces/proxy-7131/services/http:proxy-service-jw2qq:portname2/proxy/: bar (200; 9.481511ms) | |
Mar 6 02:43:27.437: INFO: (3) /api/v1/namespaces/proxy-7131/pods/proxy-service-jw2qq-p6vtp:1080/proxy/: <a href="/api/v1/namespaces/proxy-7131/pods/proxy-service-jw2qq-p6vtp:1080/proxy/rewriteme">test<... (200; 4.023759ms) | |
Mar 6 02:43:27.439: INFO: (3) /api/v1/namespaces/proxy-7131/pods/proxy-service-jw2qq-p6vtp:162/proxy/: bar (200; 5.598598ms) | |
Mar 6 02:43:27.439: INFO: (3) /api/v1/namespaces/proxy-7131/pods/http:proxy-service-jw2qq-p6vtp:160/proxy/: foo (200; 6.268182ms) | |
Mar 6 02:43:27.439: INFO: (3) /api/v1/namespaces/proxy-7131/pods/http:proxy-service-jw2qq-p6vtp:1080/proxy/: <a href="/api/v1/namespaces/proxy-7131/pods/http:proxy-service-jw2qq-p6vtp:1080/proxy/rewriteme">... (200; 6.138413ms) | |
Mar 6 02:43:27.439: INFO: (3) /api/v1/namespaces/proxy-7131/services/https:proxy-service-jw2qq:tlsportname2/proxy/: tls qux (200; 6.923995ms) | |
Mar 6 02:43:27.440: INFO: (3) /api/v1/namespaces/proxy-7131/pods/https:proxy-service-jw2qq-p6vtp:443/proxy/: <a href="/api/v1/namespaces/proxy-7131/pods/https:proxy-service-jw2qq-p6vtp:443/proxy/tlsrewritem... (200; 6.985255ms) | |
Mar 6 02:43:27.441: INFO: (3) /api/v1/namespaces/proxy-7131/pods/proxy-service-jw2qq-p6vtp:160/proxy/: foo (200; 8.235308ms) | |
Mar 6 02:43:27.441: INFO: (3) /api/v1/namespaces/proxy-7131/pods/proxy-service-jw2qq-p6vtp/proxy/: <a href="/api/v1/namespaces/proxy-7131/pods/proxy-service-jw2qq-p6vtp/proxy/rewriteme">test</a> (200; 7.926913ms) | |
Mar 6 02:43:27.441: INFO: (3) /api/v1/namespaces/proxy-7131/pods/https:proxy-service-jw2qq-p6vtp:460/proxy/: tls baz (200; 7.978309ms) | |
Mar 6 02:43:27.441: INFO: (3) /api/v1/namespaces/proxy-7131/pods/http:proxy-service-jw2qq-p6vtp:162/proxy/: bar (200; 8.144636ms) | |
Mar 6 02:43:27.441: INFO: (3) /api/v1/namespaces/proxy-7131/services/proxy-service-jw2qq:portname1/proxy/: foo (200; 8.594394ms) | |
Mar 6 02:43:27.441: INFO: (3) /api/v1/namespaces/proxy-7131/services/proxy-service-jw2qq:portname2/proxy/: bar (200; 8.48441ms) | |
Mar 6 02:43:27.441: INFO: (3) /api/v1/namespaces/proxy-7131/pods/https:proxy-service-jw2qq-p6vtp:462/proxy/: tls qux (200; 8.063273ms) | |
Mar 6 02:43:27.441: INFO: (3) /api/v1/namespaces/proxy-7131/services/https:proxy-service-jw2qq:tlsportname1/proxy/: tls baz (200; 8.398984ms) | |
Mar 6 02:43:27.442: INFO: (3) /api/v1/namespaces/proxy-7131/services/http:proxy-service-jw2qq:portname2/proxy/: bar (200; 8.807832ms) | |
Mar 6 02:43:27.442: INFO: (3) /api/v1/namespaces/proxy-7131/services/http:proxy-service-jw2qq:portname1/proxy/: foo (200; 8.637109ms) | |
Mar 6 02:43:27.446: INFO: (4) /api/v1/namespaces/proxy-7131/pods/https:proxy-service-jw2qq-p6vtp:443/proxy/: <a href="/api/v1/namespaces/proxy-7131/pods/https:proxy-service-jw2qq-p6vtp:443/proxy/tlsrewritem... (200; 3.252831ms) | |
Mar 6 02:43:27.446: INFO: (4) /api/v1/namespaces/proxy-7131/pods/http:proxy-service-jw2qq-p6vtp:160/proxy/: foo (200; 3.61071ms) | |
Mar 6 02:43:27.446: INFO: (4) /api/v1/namespaces/proxy-7131/pods/proxy-service-jw2qq-p6vtp:1080/proxy/: <a href="/api/v1/namespaces/proxy-7131/pods/proxy-service-jw2qq-p6vtp:1080/proxy/rewriteme">test<... (200; 4.162762ms) | |
Mar 6 02:43:27.447: INFO: (4) /api/v1/namespaces/proxy-7131/services/proxy-service-jw2qq:portname1/proxy/: foo (200; 4.431647ms) | |
Mar 6 02:43:27.449: INFO: (4) /api/v1/namespaces/proxy-7131/services/proxy-service-jw2qq:portname2/proxy/: bar (200; 6.720341ms) | |
Mar 6 02:43:27.449: INFO: (4) /api/v1/namespaces/proxy-7131/pods/https:proxy-service-jw2qq-p6vtp:462/proxy/: tls qux (200; 7.332812ms) | |
Mar 6 02:43:27.450: INFO: (4) /api/v1/namespaces/proxy-7131/services/http:proxy-service-jw2qq:portname2/proxy/: bar (200; 7.360394ms) | |
Mar 6 02:43:27.451: INFO: (4) /api/v1/namespaces/proxy-7131/pods/http:proxy-service-jw2qq-p6vtp:1080/proxy/: <a href="/api/v1/namespaces/proxy-7131/pods/http:proxy-service-jw2qq-p6vtp:1080/proxy/rewriteme">... (200; 8.469276ms) | |
Mar 6 02:43:27.451: INFO: (4) /api/v1/namespaces/proxy-7131/pods/proxy-service-jw2qq-p6vtp/proxy/: <a href="/api/v1/namespaces/proxy-7131/pods/proxy-service-jw2qq-p6vtp/proxy/rewriteme">test</a> (200; 9.043979ms) | |
Mar 6 02:43:27.452: INFO: (4) /api/v1/namespaces/proxy-7131/services/http:proxy-service-jw2qq:portname1/proxy/: foo (200; 10.055199ms) | |
Mar 6 02:43:27.452: INFO: (4) /api/v1/namespaces/proxy-7131/pods/proxy-service-jw2qq-p6vtp:160/proxy/: foo (200; 9.776144ms) | |
Mar 6 02:43:27.452: INFO: (4) /api/v1/namespaces/proxy-7131/services/https:proxy-service-jw2qq:tlsportname2/proxy/: tls qux (200; 10.460658ms) | |
Mar 6 02:43:27.452: INFO: (4) /api/v1/namespaces/proxy-7131/pods/proxy-service-jw2qq-p6vtp:162/proxy/: bar (200; 9.760661ms) | |
Mar 6 02:43:27.453: INFO: (4) /api/v1/namespaces/proxy-7131/pods/https:proxy-service-jw2qq-p6vtp:460/proxy/: tls baz (200; 9.926909ms) | |
Mar 6 02:43:27.453: INFO: (4) /api/v1/namespaces/proxy-7131/pods/http:proxy-service-jw2qq-p6vtp:162/proxy/: bar (200; 10.062458ms) | |
Mar 6 02:43:27.454: INFO: (4) /api/v1/namespaces/proxy-7131/services/https:proxy-service-jw2qq:tlsportname1/proxy/: tls baz (200; 11.209613ms) | |
Mar 6 02:43:27.458: INFO: (5) /api/v1/namespaces/proxy-7131/pods/https:proxy-service-jw2qq-p6vtp:443/proxy/: <a href="/api/v1/namespaces/proxy-7131/pods/https:proxy-service-jw2qq-p6vtp:443/proxy/tlsrewritem... (200; 3.68871ms) | |
Mar 6 02:43:27.458: INFO: (5) /api/v1/namespaces/proxy-7131/pods/proxy-service-jw2qq-p6vtp:162/proxy/: bar (200; 3.604912ms) | |
Mar 6 02:43:27.458: INFO: (5) /api/v1/namespaces/proxy-7131/pods/proxy-service-jw2qq-p6vtp:1080/proxy/: <a href="/api/v1/namespaces/proxy-7131/pods/proxy-service-jw2qq-p6vtp:1080/proxy/rewriteme">test<... (200; 3.776935ms) | |
Mar 6 02:43:27.459: INFO: (5) /api/v1/namespaces/proxy-7131/pods/proxy-service-jw2qq-p6vtp/proxy/: <a href="/api/v1/namespaces/proxy-7131/pods/proxy-service-jw2qq-p6vtp/proxy/rewriteme">test</a> (200; 5.475411ms) | |
Mar 6 02:43:27.459: INFO: (5) /api/v1/namespaces/proxy-7131/pods/proxy-service-jw2qq-p6vtp:160/proxy/: foo (200; 4.795473ms) | |
Mar 6 02:43:27.459: INFO: (5) /api/v1/namespaces/proxy-7131/pods/https:proxy-service-jw2qq-p6vtp:462/proxy/: tls qux (200; 4.696641ms) | |
Mar 6 02:43:27.459: INFO: (5) /api/v1/namespaces/proxy-7131/services/proxy-service-jw2qq:portname1/proxy/: foo (200; 5.441649ms) | |
Mar 6 02:43:27.459: INFO: (5) /api/v1/namespaces/proxy-7131/pods/http:proxy-service-jw2qq-p6vtp:1080/proxy/: <a href="/api/v1/namespaces/proxy-7131/pods/http:proxy-service-jw2qq-p6vtp:1080/proxy/rewriteme">... (200; 4.892401ms) | |
Mar 6 02:43:27.460: INFO: (5) /api/v1/namespaces/proxy-7131/services/https:proxy-service-jw2qq:tlsportname1/proxy/: tls baz (200; 5.158743ms) | |
Mar 6 02:43:27.460: INFO: (5) /api/v1/namespaces/proxy-7131/services/http:proxy-service-jw2qq:portname2/proxy/: bar (200; 5.463738ms) | |
Mar 6 02:43:27.463: INFO: (5) /api/v1/namespaces/proxy-7131/services/proxy-service-jw2qq:portname2/proxy/: bar (200; 8.281617ms) | |
Mar 6 02:43:27.463: INFO: (5) /api/v1/namespaces/proxy-7131/pods/http:proxy-service-jw2qq-p6vtp:160/proxy/: foo (200; 8.521791ms) | |
Mar 6 02:43:27.463: INFO: (5) /api/v1/namespaces/proxy-7131/pods/http:proxy-service-jw2qq-p6vtp:162/proxy/: bar (200; 8.473404ms) | |
Mar 6 02:43:27.463: INFO: (5) /api/v1/namespaces/proxy-7131/pods/https:proxy-service-jw2qq-p6vtp:460/proxy/: tls baz (200; 8.145207ms) | |
Mar 6 02:43:27.463: INFO: (5) /api/v1/namespaces/proxy-7131/services/http:proxy-service-jw2qq:portname1/proxy/: foo (200; 8.499329ms) | |
Mar 6 02:43:27.463: INFO: (5) /api/v1/namespaces/proxy-7131/services/https:proxy-service-jw2qq:tlsportname2/proxy/: tls qux (200; 8.064096ms) | |
Mar 6 02:43:27.465: INFO: (6) /api/v1/namespaces/proxy-7131/pods/proxy-service-jw2qq-p6vtp/proxy/: <a href="/api/v1/namespaces/proxy-7131/pods/proxy-service-jw2qq-p6vtp/proxy/rewriteme">test</a> (200; 2.005048ms) | |
Mar 6 02:43:27.466: INFO: (6) /api/v1/namespaces/proxy-7131/pods/proxy-service-jw2qq-p6vtp:1080/proxy/: <a href="/api/v1/namespaces/proxy-7131/pods/proxy-service-jw2qq-p6vtp:1080/proxy/rewriteme">test<... (200; 2.993281ms) | |
Mar 6 02:43:27.466: INFO: (6) /api/v1/namespaces/proxy-7131/pods/https:proxy-service-jw2qq-p6vtp:443/proxy/: <a href="/api/v1/namespaces/proxy-7131/pods/https:proxy-service-jw2qq-p6vtp:443/proxy/tlsrewritem... (200; 2.938958ms) | |
Mar 6 02:43:27.469: INFO: (6) /api/v1/namespaces/proxy-7131/services/http:proxy-service-jw2qq:portname2/proxy/: bar (200; 5.70615ms) | |
Mar 6 02:43:27.469: INFO: (6) /api/v1/namespaces/proxy-7131/pods/https:proxy-service-jw2qq-p6vtp:462/proxy/: tls qux (200; 5.331757ms) | |
Mar 6 02:43:27.469: INFO: (6) /api/v1/namespaces/proxy-7131/services/https:proxy-service-jw2qq:tlsportname2/proxy/: tls qux (200; 6.095839ms) | |
Mar 6 02:43:27.469: INFO: (6) /api/v1/namespaces/proxy-7131/pods/http:proxy-service-jw2qq-p6vtp:1080/proxy/: <a href="/api/v1/namespaces/proxy-7131/pods/http:proxy-service-jw2qq-p6vtp:1080/proxy/rewriteme">... (200; 5.603291ms) | |
Mar 6 02:43:27.469: INFO: (6) /api/v1/namespaces/proxy-7131/pods/proxy-service-jw2qq-p6vtp:162/proxy/: bar (200; 5.508694ms) | |
Mar 6 02:43:27.471: INFO: (6) /api/v1/namespaces/proxy-7131/services/https:proxy-service-jw2qq:tlsportname1/proxy/: tls baz (200; 7.508064ms) | |
Mar 6 02:43:27.471: INFO: (6) /api/v1/namespaces/proxy-7131/services/proxy-service-jw2qq:portname2/proxy/: bar (200; 7.684856ms) | |
Mar 6 02:43:27.471: INFO: (6) /api/v1/namespaces/proxy-7131/pods/https:proxy-service-jw2qq-p6vtp:460/proxy/: tls baz (200; 8.132945ms) | |
Mar 6 02:43:27.471: INFO: (6) /api/v1/namespaces/proxy-7131/pods/http:proxy-service-jw2qq-p6vtp:160/proxy/: foo (200; 7.762546ms) | |
Mar 6 02:43:27.471: INFO: (6) /api/v1/namespaces/proxy-7131/pods/http:proxy-service-jw2qq-p6vtp:162/proxy/: bar (200; 7.490046ms) | |
Mar 6 02:43:27.471: INFO: (6) /api/v1/namespaces/proxy-7131/pods/proxy-service-jw2qq-p6vtp:160/proxy/: foo (200; 7.617804ms) | |
Mar 6 02:43:27.475: INFO: (6) /api/v1/namespaces/proxy-7131/services/http:proxy-service-jw2qq:portname1/proxy/: foo (200; 10.944438ms) | |
Mar 6 02:43:27.475: INFO: (6) /api/v1/namespaces/proxy-7131/services/proxy-service-jw2qq:portname1/proxy/: foo (200; 11.482834ms) | |
Mar 6 02:43:27.479: INFO: (7) /api/v1/namespaces/proxy-7131/pods/proxy-service-jw2qq-p6vtp/proxy/: <a href="/api/v1/namespaces/proxy-7131/pods/proxy-service-jw2qq-p6vtp/proxy/rewriteme">test</a> (200; 4.385047ms) | |
Mar 6 02:43:27.481: INFO: (7) /api/v1/namespaces/proxy-7131/pods/https:proxy-service-jw2qq-p6vtp:460/proxy/: tls baz (200; 6.143238ms) | |
Mar 6 02:43:27.481: INFO: (7) /api/v1/namespaces/proxy-7131/pods/proxy-service-jw2qq-p6vtp:1080/proxy/: <a href="/api/v1/namespaces/proxy-7131/pods/proxy-service-jw2qq-p6vtp:1080/proxy/rewriteme">test<... (200; 6.001582ms) | |
Mar 6 02:43:27.481: INFO: (7) /api/v1/namespaces/proxy-7131/pods/http:proxy-service-jw2qq-p6vtp:160/proxy/: foo (200; 5.870819ms) | |
Mar 6 02:43:27.481: INFO: (7) /api/v1/namespaces/proxy-7131/pods/http:proxy-service-jw2qq-p6vtp:162/proxy/: bar (200; 5.510248ms) | |
Mar 6 02:43:27.481: INFO: (7) /api/v1/namespaces/proxy-7131/pods/https:proxy-service-jw2qq-p6vtp:443/proxy/: <a href="/api/v1/namespaces/proxy-7131/pods/https:proxy-service-jw2qq-p6vtp:443/proxy/tlsrewritem... (200; 6.020065ms) | |
Mar 6 02:43:27.481: INFO: (7) /api/v1/namespaces/proxy-7131/pods/http:proxy-service-jw2qq-p6vtp:1080/proxy/: <a href="/api/v1/namespaces/proxy-7131/pods/http:proxy-service-jw2qq-p6vtp:1080/proxy/rewriteme">... (200; 5.719785ms) | |
Mar 6 02:43:27.481: INFO: (7) /api/v1/namespaces/proxy-7131/pods/proxy-service-jw2qq-p6vtp:162/proxy/: bar (200; 5.958845ms) | |
Mar 6 02:43:27.482: INFO: (7) /api/v1/namespaces/proxy-7131/services/proxy-service-jw2qq:portname1/proxy/: foo (200; 7.348539ms) | |
Mar 6 02:43:27.482: INFO: (7) /api/v1/namespaces/proxy-7131/services/http:proxy-service-jw2qq:portname2/proxy/: bar (200; 7.236095ms) | |
Mar 6 02:43:27.482: INFO: (7) /api/v1/namespaces/proxy-7131/services/proxy-service-jw2qq:portname2/proxy/: bar (200; 7.320716ms) | |
Mar 6 02:43:27.483: INFO: (7) /api/v1/namespaces/proxy-7131/services/https:proxy-service-jw2qq:tlsportname1/proxy/: tls baz (200; 7.980397ms) | |
Mar 6 02:43:27.484: INFO: (7) /api/v1/namespaces/proxy-7131/pods/https:proxy-service-jw2qq-p6vtp:462/proxy/: tls qux (200; 8.102288ms) | |
Mar 6 02:43:27.484: INFO: (7) /api/v1/namespaces/proxy-7131/pods/proxy-service-jw2qq-p6vtp:160/proxy/: foo (200; 8.346771ms) | |
Mar 6 02:43:27.484: INFO: (7) /api/v1/namespaces/proxy-7131/services/https:proxy-service-jw2qq:tlsportname2/proxy/: tls qux (200; 9.084973ms) | |
Mar 6 02:43:27.484: INFO: (7) /api/v1/namespaces/proxy-7131/services/http:proxy-service-jw2qq:portname1/proxy/: foo (200; 9.113267ms) | |
Mar 6 02:43:27.487: INFO: (8) /api/v1/namespaces/proxy-7131/pods/http:proxy-service-jw2qq-p6vtp:160/proxy/: foo (200; 2.684431ms) | |
Mar 6 02:43:27.487: INFO: (8) /api/v1/namespaces/proxy-7131/pods/http:proxy-service-jw2qq-p6vtp:1080/proxy/: <a href="/api/v1/namespaces/proxy-7131/pods/http:proxy-service-jw2qq-p6vtp:1080/proxy/rewriteme">... (200; 2.620382ms) | |
Mar 6 02:43:27.488: INFO: (8) /api/v1/namespaces/proxy-7131/pods/https:proxy-service-jw2qq-p6vtp:462/proxy/: tls qux (200; 2.904889ms) | |
Mar 6 02:43:27.488: INFO: (8) /api/v1/namespaces/proxy-7131/pods/proxy-service-jw2qq-p6vtp:162/proxy/: bar (200; 3.244387ms) | |
Mar 6 02:43:27.489: INFO: (8) /api/v1/namespaces/proxy-7131/services/proxy-service-jw2qq:portname1/proxy/: foo (200; 4.033585ms) | |
Mar 6 02:43:27.489: INFO: (8) /api/v1/namespaces/proxy-7131/pods/proxy-service-jw2qq-p6vtp/proxy/: <a href="/api/v1/namespaces/proxy-7131/pods/proxy-service-jw2qq-p6vtp/proxy/rewriteme">test</a> (200; 3.737147ms) | |
Mar 6 02:43:27.490: INFO: (8) /api/v1/namespaces/proxy-7131/services/https:proxy-service-jw2qq:tlsportname1/proxy/: tls baz (200; 5.36037ms) | |
Mar 6 02:43:27.490: INFO: (8) /api/v1/namespaces/proxy-7131/pods/https:proxy-service-jw2qq-p6vtp:460/proxy/: tls baz (200; 5.145491ms) | |
Mar 6 02:43:27.491: INFO: (8) /api/v1/namespaces/proxy-7131/services/http:proxy-service-jw2qq:portname1/proxy/: foo (200; 5.788993ms) | |
Mar 6 02:43:27.491: INFO: (8) /api/v1/namespaces/proxy-7131/pods/https:proxy-service-jw2qq-p6vtp:443/proxy/: <a href="/api/v1/namespaces/proxy-7131/pods/https:proxy-service-jw2qq-p6vtp:443/proxy/tlsrewritem... (200; 5.604366ms) | |
Mar 6 02:43:27.493: INFO: (8) /api/v1/namespaces/proxy-7131/pods/proxy-service-jw2qq-p6vtp:160/proxy/: foo (200; 7.717065ms) | |
Mar 6 02:43:27.493: INFO: (8) /api/v1/namespaces/proxy-7131/pods/proxy-service-jw2qq-p6vtp:1080/proxy/: <a href="/api/v1/namespaces/proxy-7131/pods/proxy-service-jw2qq-p6vtp:1080/proxy/rewriteme">test<... (200; 7.415157ms) | |
Mar 6 02:43:27.493: INFO: (8) /api/v1/namespaces/proxy-7131/services/proxy-service-jw2qq:portname2/proxy/: bar (200; 8.558449ms) | |
Mar 6 02:43:27.493: INFO: (8) /api/v1/namespaces/proxy-7131/pods/http:proxy-service-jw2qq-p6vtp:162/proxy/: bar (200; 8.473915ms) | |
Mar 6 02:43:27.494: INFO: (8) /api/v1/namespaces/proxy-7131/services/http:proxy-service-jw2qq:portname2/proxy/: bar (200; 9.1008ms) | |
Mar 6 02:43:27.495: INFO: (8) /api/v1/namespaces/proxy-7131/services/https:proxy-service-jw2qq:tlsportname2/proxy/: tls qux (200; 9.511101ms) | |
Mar 6 02:43:27.501: INFO: (9) /api/v1/namespaces/proxy-7131/pods/proxy-service-jw2qq-p6vtp:1080/proxy/: <a href="/api/v1/namespaces/proxy-7131/pods/proxy-service-jw2qq-p6vtp:1080/proxy/rewriteme">test<... (200; 6.264267ms) | |
Mar 6 02:43:27.502: INFO: (9) /api/v1/namespaces/proxy-7131/pods/proxy-service-jw2qq-p6vtp:162/proxy/: bar (200; 6.303027ms) | |
Mar 6 02:43:27.502: INFO: (9) /api/v1/namespaces/proxy-7131/pods/proxy-service-jw2qq-p6vtp:160/proxy/: foo (200; 6.379904ms) | |
Mar 6 02:43:27.502: INFO: (9) /api/v1/namespaces/proxy-7131/pods/http:proxy-service-jw2qq-p6vtp:1080/proxy/: <a href="/api/v1/namespaces/proxy-7131/pods/http:proxy-service-jw2qq-p6vtp:1080/proxy/rewriteme">... (200; 6.458649ms) | |
Mar 6 02:43:27.502: INFO: (9) /api/v1/namespaces/proxy-7131/pods/http:proxy-service-jw2qq-p6vtp:162/proxy/: bar (200; 6.780195ms) | |
Mar 6 02:43:27.502: INFO: (9) /api/v1/namespaces/proxy-7131/pods/http:proxy-service-jw2qq-p6vtp:160/proxy/: foo (200; 7.103348ms) | |
Mar 6 02:43:27.502: INFO: (9) /api/v1/namespaces/proxy-7131/pods/https:proxy-service-jw2qq-p6vtp:443/proxy/: <a href="/api/v1/namespaces/proxy-7131/pods/https:proxy-service-jw2qq-p6vtp:443/proxy/tlsrewritem... (200; 7.260978ms) | |
Mar 6 02:43:27.502: INFO: (9) /api/v1/namespaces/proxy-7131/pods/proxy-service-jw2qq-p6vtp/proxy/: <a href="/api/v1/namespaces/proxy-7131/pods/proxy-service-jw2qq-p6vtp/proxy/rewriteme">test</a> (200; 7.588887ms) | |
Mar 6 02:43:27.504: INFO: (9) /api/v1/namespaces/proxy-7131/pods/https:proxy-service-jw2qq-p6vtp:462/proxy/: tls qux (200; 8.615904ms) | |
Mar 6 02:43:27.505: INFO: (9) /api/v1/namespaces/proxy-7131/services/http:proxy-service-jw2qq:portname1/proxy/: foo (200; 9.326655ms) | |
Mar 6 02:43:27.505: INFO: (9) /api/v1/namespaces/proxy-7131/services/proxy-service-jw2qq:portname2/proxy/: bar (200; 9.769996ms) | |
Mar 6 02:43:27.505: INFO: (9) /api/v1/namespaces/proxy-7131/pods/https:proxy-service-jw2qq-p6vtp:460/proxy/: tls baz (200; 9.740679ms) | |
Mar 6 02:43:27.505: INFO: (9) /api/v1/namespaces/proxy-7131/services/https:proxy-service-jw2qq:tlsportname1/proxy/: tls baz (200; 9.857813ms) | |
Mar 6 02:43:27.509: INFO: (9) /api/v1/namespaces/proxy-7131/services/proxy-service-jw2qq:portname1/proxy/: foo (200; 13.645418ms) | |
Mar 6 02:43:27.509: INFO: (9) /api/v1/namespaces/proxy-7131/services/http:proxy-service-jw2qq:portname2/proxy/: bar (200; 13.037259ms) | |
Mar 6 02:43:27.509: INFO: (9) /api/v1/namespaces/proxy-7131/services/https:proxy-service-jw2qq:tlsportname2/proxy/: tls qux (200; 13.612125ms) | |
Mar 6 02:43:27.515: INFO: (10) /api/v1/namespaces/proxy-7131/pods/http:proxy-service-jw2qq-p6vtp:1080/proxy/: <a href="/api/v1/namespaces/proxy-7131/pods/http:proxy-service-jw2qq-p6vtp:1080/proxy/rewriteme">... (200; 5.594066ms) | |
Mar 6 02:43:27.516: INFO: (10) /api/v1/namespaces/proxy-7131/pods/http:proxy-service-jw2qq-p6vtp:162/proxy/: bar (200; 5.922942ms) | |
Mar 6 02:43:27.516: INFO: (10) /api/v1/namespaces/proxy-7131/pods/proxy-service-jw2qq-p6vtp:1080/proxy/: <a href="/api/v1/namespaces/proxy-7131/pods/proxy-service-jw2qq-p6vtp:1080/proxy/rewriteme">test<... (200; 6.597625ms) | |
Mar 6 02:43:27.516: INFO: (10) /api/v1/namespaces/proxy-7131/pods/proxy-service-jw2qq-p6vtp:160/proxy/: foo (200; 6.372118ms) | |
Mar 6 02:43:27.517: INFO: (10) /api/v1/namespaces/proxy-7131/pods/https:proxy-service-jw2qq-p6vtp:443/proxy/: <a href="/api/v1/namespaces/proxy-7131/pods/https:proxy-service-jw2qq-p6vtp:443/proxy/tlsrewritem... (200; 7.415117ms) | |
Mar 6 02:43:27.517: INFO: (10) /api/v1/namespaces/proxy-7131/services/http:proxy-service-jw2qq:portname1/proxy/: foo (200; 6.904516ms) | |
Mar 6 02:43:27.517: INFO: (10) /api/v1/namespaces/proxy-7131/pods/https:proxy-service-jw2qq-p6vtp:460/proxy/: tls baz (200; 6.821143ms) | |
Mar 6 02:43:27.519: INFO: (10) /api/v1/namespaces/proxy-7131/services/https:proxy-service-jw2qq:tlsportname1/proxy/: tls baz (200; 8.959867ms) | |
Mar 6 02:43:27.521: INFO: (10) /api/v1/namespaces/proxy-7131/services/proxy-service-jw2qq:portname2/proxy/: bar (200; 10.655855ms) | |
Mar 6 02:43:27.523: INFO: (10) /api/v1/namespaces/proxy-7131/pods/http:proxy-service-jw2qq-p6vtp:160/proxy/: foo (200; 12.959487ms) | |
Mar 6 02:43:27.523: INFO: (10) /api/v1/namespaces/proxy-7131/pods/https:proxy-service-jw2qq-p6vtp:462/proxy/: tls qux (200; 12.725132ms) | |
Mar 6 02:43:27.523: INFO: (10) /api/v1/namespaces/proxy-7131/services/proxy-service-jw2qq:portname1/proxy/: foo (200; 13.389517ms) | |
Mar 6 02:43:27.523: INFO: (10) /api/v1/namespaces/proxy-7131/pods/proxy-service-jw2qq-p6vtp/proxy/: <a href="/api/v1/namespaces/proxy-7131/pods/proxy-service-jw2qq-p6vtp/proxy/rewriteme">test</a> (200; 13.584456ms) | |
Mar 6 02:43:27.523: INFO: (10) /api/v1/namespaces/proxy-7131/pods/proxy-service-jw2qq-p6vtp:162/proxy/: bar (200; 13.11182ms) | |
Mar 6 02:43:27.523: INFO: (10) /api/v1/namespaces/proxy-7131/services/http:proxy-service-jw2qq:portname2/proxy/: bar (200; 13.447437ms) | |
Mar 6 02:43:27.524: INFO: (10) /api/v1/namespaces/proxy-7131/services/https:proxy-service-jw2qq:tlsportname2/proxy/: tls qux (200; 13.964067ms) | |
Mar 6 02:43:27.527: INFO: (11) /api/v1/namespaces/proxy-7131/pods/proxy-service-jw2qq-p6vtp/proxy/: <a href="/api/v1/namespaces/proxy-7131/pods/proxy-service-jw2qq-p6vtp/proxy/rewriteme">test</a> (200; 2.953805ms) | |
Mar 6 02:43:27.527: INFO: (11) /api/v1/namespaces/proxy-7131/pods/https:proxy-service-jw2qq-p6vtp:460/proxy/: tls baz (200; 3.116933ms) | |
Mar 6 02:43:27.528: INFO: (11) /api/v1/namespaces/proxy-7131/pods/proxy-service-jw2qq-p6vtp:1080/proxy/: <a href="/api/v1/namespaces/proxy-7131/pods/proxy-service-jw2qq-p6vtp:1080/proxy/rewriteme">test<... (200; 3.03086ms) | |
Mar 6 02:43:27.528: INFO: (11) /api/v1/namespaces/proxy-7131/pods/http:proxy-service-jw2qq-p6vtp:162/proxy/: bar (200; 3.091431ms) | |
Mar 6 02:43:27.530: INFO: (11) /api/v1/namespaces/proxy-7131/services/proxy-service-jw2qq:portname1/proxy/: foo (200; 5.712985ms) | |
Mar 6 02:43:27.530: INFO: (11) /api/v1/namespaces/proxy-7131/pods/http:proxy-service-jw2qq-p6vtp:160/proxy/: foo (200; 5.67844ms) | |
Mar 6 02:43:27.531: INFO: (11) /api/v1/namespaces/proxy-7131/services/https:proxy-service-jw2qq:tlsportname1/proxy/: tls baz (200; 6.084747ms) | |
Mar 6 02:43:27.531: INFO: (11) /api/v1/namespaces/proxy-7131/services/proxy-service-jw2qq:portname2/proxy/: bar (200; 6.338116ms) | |
Mar 6 02:43:27.531: INFO: (11) /api/v1/namespaces/proxy-7131/pods/https:proxy-service-jw2qq-p6vtp:462/proxy/: tls qux (200; 6.087171ms) | |
Mar 6 02:43:27.532: INFO: (11) /api/v1/namespaces/proxy-7131/pods/proxy-service-jw2qq-p6vtp:160/proxy/: foo (200; 6.811328ms) | |
Mar 6 02:43:27.532: INFO: (11) /api/v1/namespaces/proxy-7131/pods/proxy-service-jw2qq-p6vtp:162/proxy/: bar (200; 7.20696ms) | |
Mar 6 02:43:27.532: INFO: (11) /api/v1/namespaces/proxy-7131/pods/https:proxy-service-jw2qq-p6vtp:443/proxy/: <a href="/api/v1/namespaces/proxy-7131/pods/https:proxy-service-jw2qq-p6vtp:443/proxy/tlsrewritem... (200; 7.751781ms) | |
Mar 6 02:43:27.533: INFO: (11) /api/v1/namespaces/proxy-7131/pods/http:proxy-service-jw2qq-p6vtp:1080/proxy/: <a href="/api/v1/namespaces/proxy-7131/pods/http:proxy-service-jw2qq-p6vtp:1080/proxy/rewriteme">... (200; 7.658941ms) | |
Mar 6 02:43:27.533: INFO: (11) /api/v1/namespaces/proxy-7131/services/https:proxy-service-jw2qq:tlsportname2/proxy/: tls qux (200; 8.395955ms) | |
Mar 6 02:43:27.535: INFO: (11) /api/v1/namespaces/proxy-7131/services/http:proxy-service-jw2qq:portname1/proxy/: foo (200; 9.356815ms) | |
Mar 6 02:43:27.535: INFO: (11) /api/v1/namespaces/proxy-7131/services/http:proxy-service-jw2qq:portname2/proxy/: bar (200; 9.685836ms) | |
Mar 6 02:43:27.538: INFO: (12) /api/v1/namespaces/proxy-7131/pods/http:proxy-service-jw2qq-p6vtp:160/proxy/: foo (200; 2.983573ms) | |
Mar 6 02:43:27.539: INFO: (12) /api/v1/namespaces/proxy-7131/pods/http:proxy-service-jw2qq-p6vtp:1080/proxy/: <a href="/api/v1/namespaces/proxy-7131/pods/http:proxy-service-jw2qq-p6vtp:1080/proxy/rewriteme">... (200; 4.460149ms) | |
Mar 6 02:43:27.539: INFO: (12) /api/v1/namespaces/proxy-7131/pods/https:proxy-service-jw2qq-p6vtp:443/proxy/: <a href="/api/v1/namespaces/proxy-7131/pods/https:proxy-service-jw2qq-p6vtp:443/proxy/tlsrewritem... (200; 4.181911ms) | |
Mar 6 02:43:27.541: INFO: (12) /api/v1/namespaces/proxy-7131/pods/proxy-service-jw2qq-p6vtp/proxy/: <a href="/api/v1/namespaces/proxy-7131/pods/proxy-service-jw2qq-p6vtp/proxy/rewriteme">test</a> (200; 5.508753ms) | |
Mar 6 02:43:27.541: INFO: (12) /api/v1/namespaces/proxy-7131/pods/https:proxy-service-jw2qq-p6vtp:462/proxy/: tls qux (200; 5.616069ms) | |
Mar 6 02:43:27.541: INFO: (12) /api/v1/namespaces/proxy-7131/pods/proxy-service-jw2qq-p6vtp:1080/proxy/: <a href="/api/v1/namespaces/proxy-7131/pods/proxy-service-jw2qq-p6vtp:1080/proxy/rewriteme">test<... (200; 5.457181ms) | |
Mar 6 02:43:27.541: INFO: (12) /api/v1/namespaces/proxy-7131/services/proxy-service-jw2qq:portname2/proxy/: bar (200; 5.917445ms) | |
Mar 6 02:43:27.541: INFO: (12) /api/v1/namespaces/proxy-7131/services/https:proxy-service-jw2qq:tlsportname1/proxy/: tls baz (200; 5.837701ms) | |
Mar 6 02:43:27.541: INFO: (12) /api/v1/namespaces/proxy-7131/pods/proxy-service-jw2qq-p6vtp:162/proxy/: bar (200; 5.741329ms) | |
Mar 6 02:43:27.543: INFO: (12) /api/v1/namespaces/proxy-7131/services/http:proxy-service-jw2qq:portname1/proxy/: foo (200; 8.39232ms) | |
Mar 6 02:43:27.543: INFO: (12) /api/v1/namespaces/proxy-7131/pods/http:proxy-service-jw2qq-p6vtp:162/proxy/: bar (200; 8.479814ms) | |
Mar 6 02:43:27.543: INFO: (12) /api/v1/namespaces/proxy-7131/services/proxy-service-jw2qq:portname1/proxy/: foo (200; 7.9654ms) | |
Mar 6 02:43:27.543: INFO: (12) /api/v1/namespaces/proxy-7131/services/http:proxy-service-jw2qq:portname2/proxy/: bar (200; 8.343281ms) | |
Mar 6 02:43:27.543: INFO: (12) /api/v1/namespaces/proxy-7131/pods/https:proxy-service-jw2qq-p6vtp:460/proxy/: tls baz (200; 8.229582ms) | |
Mar 6 02:43:27.543: INFO: (12) /api/v1/namespaces/proxy-7131/pods/proxy-service-jw2qq-p6vtp:160/proxy/: foo (200; 8.309626ms) | |
Mar 6 02:43:27.543: INFO: (12) /api/v1/namespaces/proxy-7131/services/https:proxy-service-jw2qq:tlsportname2/proxy/: tls qux (200; 8.203791ms) | |
Mar 6 02:43:27.549: INFO: (13) /api/v1/namespaces/proxy-7131/services/proxy-service-jw2qq:portname2/proxy/: bar (200; 5.369043ms) | |
Mar 6 02:43:27.550: INFO: (13) /api/v1/namespaces/proxy-7131/pods/proxy-service-jw2qq-p6vtp:162/proxy/: bar (200; 6.366454ms) | |
Mar 6 02:43:27.550: INFO: (13) /api/v1/namespaces/proxy-7131/pods/proxy-service-jw2qq-p6vtp/proxy/: <a href="/api/v1/namespaces/proxy-7131/pods/proxy-service-jw2qq-p6vtp/proxy/rewriteme">test</a> (200; 6.132908ms) | |
Mar 6 02:43:27.550: INFO: (13) /api/v1/namespaces/proxy-7131/pods/https:proxy-service-jw2qq-p6vtp:443/proxy/: <a href="/api/v1/namespaces/proxy-7131/pods/https:proxy-service-jw2qq-p6vtp:443/proxy/tlsrewritem... (200; 5.990406ms) | |
Mar 6 02:43:27.550: INFO: (13) /api/v1/namespaces/proxy-7131/pods/http:proxy-service-jw2qq-p6vtp:162/proxy/: bar (200; 6.370379ms) | |
Mar 6 02:43:27.550: INFO: (13) /api/v1/namespaces/proxy-7131/pods/https:proxy-service-jw2qq-p6vtp:462/proxy/: tls qux (200; 6.272088ms) | |
Mar 6 02:43:27.550: INFO: (13) /api/v1/namespaces/proxy-7131/pods/https:proxy-service-jw2qq-p6vtp:460/proxy/: tls baz (200; 6.254625ms) | |
Mar 6 02:43:27.551: INFO: (13) /api/v1/namespaces/proxy-7131/services/proxy-service-jw2qq:portname1/proxy/: foo (200; 6.767285ms) | |
Mar 6 02:43:27.551: INFO: (13) /api/v1/namespaces/proxy-7131/pods/proxy-service-jw2qq-p6vtp:160/proxy/: foo (200; 7.310735ms) | |
Mar 6 02:43:27.551: INFO: (13) /api/v1/namespaces/proxy-7131/services/https:proxy-service-jw2qq:tlsportname1/proxy/: tls baz (200; 7.491888ms) | |
Mar 6 02:43:27.552: INFO: (13) /api/v1/namespaces/proxy-7131/pods/http:proxy-service-jw2qq-p6vtp:160/proxy/: foo (200; 8.702343ms) | |
Mar 6 02:43:27.552: INFO: (13) /api/v1/namespaces/proxy-7131/pods/proxy-service-jw2qq-p6vtp:1080/proxy/: <a href="/api/v1/namespaces/proxy-7131/pods/proxy-service-jw2qq-p6vtp:1080/proxy/rewriteme">test<... (200; 8.039559ms) | |
Mar 6 02:43:27.552: INFO: (13) /api/v1/namespaces/proxy-7131/services/http:proxy-service-jw2qq:portname1/proxy/: foo (200; 8.36915ms) | |
Mar 6 02:43:27.552: INFO: (13) /api/v1/namespaces/proxy-7131/pods/http:proxy-service-jw2qq-p6vtp:1080/proxy/: <a href="/api/v1/namespaces/proxy-7131/pods/http:proxy-service-jw2qq-p6vtp:1080/proxy/rewriteme">... (200; 8.794062ms) | |
Mar 6 02:43:27.552: INFO: (13) /api/v1/namespaces/proxy-7131/services/http:proxy-service-jw2qq:portname2/proxy/: bar (200; 8.951988ms) | |
Mar 6 02:43:27.553: INFO: (13) /api/v1/namespaces/proxy-7131/services/https:proxy-service-jw2qq:tlsportname2/proxy/: tls qux (200; 9.43591ms) | |
Mar 6 02:43:27.556: INFO: (14) /api/v1/namespaces/proxy-7131/pods/proxy-service-jw2qq-p6vtp/proxy/: <a href="/api/v1/namespaces/proxy-7131/pods/proxy-service-jw2qq-p6vtp/proxy/rewriteme">test</a> (200; 2.751228ms) | |
Mar 6 02:43:27.557: INFO: (14) /api/v1/namespaces/proxy-7131/services/https:proxy-service-jw2qq:tlsportname2/proxy/: tls qux (200; 3.268472ms) | |
Mar 6 02:43:27.557: INFO: (14) /api/v1/namespaces/proxy-7131/pods/https:proxy-service-jw2qq-p6vtp:462/proxy/: tls qux (200; 3.456627ms) | |
Mar 6 02:43:27.557: INFO: (14) /api/v1/namespaces/proxy-7131/pods/proxy-service-jw2qq-p6vtp:162/proxy/: bar (200; 3.541233ms) | |
Mar 6 02:43:27.559: INFO: (14) /api/v1/namespaces/proxy-7131/pods/proxy-service-jw2qq-p6vtp:160/proxy/: foo (200; 4.68479ms) | |
Mar 6 02:43:27.559: INFO: (14) /api/v1/namespaces/proxy-7131/pods/proxy-service-jw2qq-p6vtp:1080/proxy/: <a href="/api/v1/namespaces/proxy-7131/pods/proxy-service-jw2qq-p6vtp:1080/proxy/rewriteme">test<... (200; 4.872894ms) | |
Mar 6 02:43:27.559: INFO: (14) /api/v1/namespaces/proxy-7131/pods/https:proxy-service-jw2qq-p6vtp:460/proxy/: tls baz (200; 4.362031ms) | |
Mar 6 02:43:27.559: INFO: (14) /api/v1/namespaces/proxy-7131/pods/http:proxy-service-jw2qq-p6vtp:1080/proxy/: <a href="/api/v1/namespaces/proxy-7131/pods/http:proxy-service-jw2qq-p6vtp:1080/proxy/rewriteme">... (200; 4.66183ms) | |
Mar 6 02:43:27.560: INFO: (14) /api/v1/namespaces/proxy-7131/services/http:proxy-service-jw2qq:portname2/proxy/: bar (200; 5.313539ms) | |
Mar 6 02:43:27.560: INFO: (14) /api/v1/namespaces/proxy-7131/services/http:proxy-service-jw2qq:portname1/proxy/: foo (200; 5.722675ms) | |
Mar 6 02:43:27.562: INFO: (14) /api/v1/namespaces/proxy-7131/services/proxy-service-jw2qq:portname1/proxy/: foo (200; 8.180861ms) | |
Mar 6 02:43:27.562: INFO: (14) /api/v1/namespaces/proxy-7131/pods/http:proxy-service-jw2qq-p6vtp:160/proxy/: foo (200; 8.064862ms) | |
Mar 6 02:43:27.563: INFO: (14) /api/v1/namespaces/proxy-7131/pods/https:proxy-service-jw2qq-p6vtp:443/proxy/: <a href="/api/v1/namespaces/proxy-7131/pods/https:proxy-service-jw2qq-p6vtp:443/proxy/tlsrewritem... (200; 8.782487ms) | |
Mar 6 02:43:27.563: INFO: (14) /api/v1/namespaces/proxy-7131/services/proxy-service-jw2qq:portname2/proxy/: bar (200; 8.453657ms) | |
Mar 6 02:43:27.563: INFO: (14) /api/v1/namespaces/proxy-7131/pods/http:proxy-service-jw2qq-p6vtp:162/proxy/: bar (200; 8.592192ms) | |
Mar 6 02:43:27.563: INFO: (14) /api/v1/namespaces/proxy-7131/services/https:proxy-service-jw2qq:tlsportname1/proxy/: tls baz (200; 8.765907ms) | |
Mar 6 02:43:27.566: INFO: (15) /api/v1/namespaces/proxy-7131/pods/http:proxy-service-jw2qq-p6vtp:160/proxy/: foo (200; 2.811717ms) | |
Mar 6 02:43:27.567: INFO: (15) /api/v1/namespaces/proxy-7131/pods/proxy-service-jw2qq-p6vtp:162/proxy/: bar (200; 3.271617ms) | |
Mar 6 02:43:27.567: INFO: (15) /api/v1/namespaces/proxy-7131/pods/proxy-service-jw2qq-p6vtp:160/proxy/: foo (200; 3.37345ms) | |
Mar 6 02:43:27.567: INFO: (15) /api/v1/namespaces/proxy-7131/pods/https:proxy-service-jw2qq-p6vtp:460/proxy/: tls baz (200; 3.235147ms) | |
Mar 6 02:43:27.567: INFO: (15) /api/v1/namespaces/proxy-7131/pods/http:proxy-service-jw2qq-p6vtp:162/proxy/: bar (200; 3.357332ms) | |
Mar 6 02:43:27.567: INFO: (15) /api/v1/namespaces/proxy-7131/pods/https:proxy-service-jw2qq-p6vtp:443/proxy/: <a href="/api/v1/namespaces/proxy-7131/pods/https:proxy-service-jw2qq-p6vtp:443/proxy/tlsrewritem... (200; 3.370846ms) | |
Mar 6 02:43:27.569: INFO: (15) /api/v1/namespaces/proxy-7131/pods/proxy-service-jw2qq-p6vtp:1080/proxy/: <a href="/api/v1/namespaces/proxy-7131/pods/proxy-service-jw2qq-p6vtp:1080/proxy/rewriteme">test<... (200; 5.711809ms) | |
Mar 6 02:43:27.569: INFO: (15) /api/v1/namespaces/proxy-7131/services/http:proxy-service-jw2qq:portname1/proxy/: foo (200; 5.900699ms) | |
Mar 6 02:43:27.570: INFO: (15) /api/v1/namespaces/proxy-7131/services/proxy-service-jw2qq:portname2/proxy/: bar (200; 6.051572ms) | |
Mar 6 02:43:27.571: INFO: (15) /api/v1/namespaces/proxy-7131/pods/http:proxy-service-jw2qq-p6vtp:1080/proxy/: <a href="/api/v1/namespaces/proxy-7131/pods/http:proxy-service-jw2qq-p6vtp:1080/proxy/rewriteme">... (200; 7.606199ms) | |
Mar 6 02:43:27.571: INFO: (15) /api/v1/namespaces/proxy-7131/services/https:proxy-service-jw2qq:tlsportname1/proxy/: tls baz (200; 7.802908ms) | |
Mar 6 02:43:27.571: INFO: (15) /api/v1/namespaces/proxy-7131/services/proxy-service-jw2qq:portname1/proxy/: foo (200; 7.453391ms) | |
Mar 6 02:43:27.571: INFO: (15) /api/v1/namespaces/proxy-7131/pods/https:proxy-service-jw2qq-p6vtp:462/proxy/: tls qux (200; 7.636269ms) | |
Mar 6 02:43:27.571: INFO: (15) /api/v1/namespaces/proxy-7131/services/https:proxy-service-jw2qq:tlsportname2/proxy/: tls qux (200; 7.647343ms) | |
Mar 6 02:43:27.572: INFO: (15) /api/v1/namespaces/proxy-7131/services/http:proxy-service-jw2qq:portname2/proxy/: bar (200; 8.281613ms) | |
Mar 6 02:43:27.572: INFO: (15) /api/v1/namespaces/proxy-7131/pods/proxy-service-jw2qq-p6vtp/proxy/: <a href="/api/v1/namespaces/proxy-7131/pods/proxy-service-jw2qq-p6vtp/proxy/rewriteme">test</a> (200; 8.234843ms) | |
Mar 6 02:43:27.576: INFO: (16) /api/v1/namespaces/proxy-7131/pods/proxy-service-jw2qq-p6vtp/proxy/: <a href="/api/v1/namespaces/proxy-7131/pods/proxy-service-jw2qq-p6vtp/proxy/rewriteme">test</a> (200; 3.677909ms) | |
Mar 6 02:43:27.576: INFO: (16) /api/v1/namespaces/proxy-7131/pods/https:proxy-service-jw2qq-p6vtp:443/proxy/: <a href="/api/v1/namespaces/proxy-7131/pods/https:proxy-service-jw2qq-p6vtp:443/proxy/tlsrewritem... (200; 4.333544ms) | |
Mar 6 02:43:27.577: INFO: (16) /api/v1/namespaces/proxy-7131/pods/proxy-service-jw2qq-p6vtp:160/proxy/: foo (200; 4.806524ms) | |
Mar 6 02:43:27.578: INFO: (16) /api/v1/namespaces/proxy-7131/pods/proxy-service-jw2qq-p6vtp:1080/proxy/: <a href="/api/v1/namespaces/proxy-7131/pods/proxy-service-jw2qq-p6vtp:1080/proxy/rewriteme">test<... (200; 6.037482ms) | |
Mar 6 02:43:27.578: INFO: (16) /api/v1/namespaces/proxy-7131/pods/http:proxy-service-jw2qq-p6vtp:1080/proxy/: <a href="/api/v1/namespaces/proxy-7131/pods/http:proxy-service-jw2qq-p6vtp:1080/proxy/rewriteme">... (200; 5.722077ms) | |
Mar 6 02:43:27.578: INFO: (16) /api/v1/namespaces/proxy-7131/services/proxy-service-jw2qq:portname1/proxy/: foo (200; 6.55223ms) | |
Mar 6 02:43:27.579: INFO: (16) /api/v1/namespaces/proxy-7131/pods/http:proxy-service-jw2qq-p6vtp:162/proxy/: bar (200; 6.635868ms) | |
Mar 6 02:43:27.579: INFO: (16) /api/v1/namespaces/proxy-7131/services/https:proxy-service-jw2qq:tlsportname1/proxy/: tls baz (200; 7.146482ms) | |
Mar 6 02:43:27.579: INFO: (16) /api/v1/namespaces/proxy-7131/pods/https:proxy-service-jw2qq-p6vtp:460/proxy/: tls baz (200; 6.8989ms) | |
Mar 6 02:43:27.580: INFO: (16) /api/v1/namespaces/proxy-7131/pods/http:proxy-service-jw2qq-p6vtp:160/proxy/: foo (200; 7.633971ms) | |
Mar 6 02:43:27.580: INFO: (16) /api/v1/namespaces/proxy-7131/services/http:proxy-service-jw2qq:portname1/proxy/: foo (200; 7.727348ms) | |
Mar 6 02:43:27.581: INFO: (16) /api/v1/namespaces/proxy-7131/pods/proxy-service-jw2qq-p6vtp:162/proxy/: bar (200; 8.027032ms) | |
Mar 6 02:43:27.581: INFO: (16) /api/v1/namespaces/proxy-7131/services/http:proxy-service-jw2qq:portname2/proxy/: bar (200; 8.384209ms) | |
Mar 6 02:43:27.581: INFO: (16) /api/v1/namespaces/proxy-7131/pods/https:proxy-service-jw2qq-p6vtp:462/proxy/: tls qux (200; 8.190723ms) | |
Mar 6 02:43:27.582: INFO: (16) /api/v1/namespaces/proxy-7131/services/proxy-service-jw2qq:portname2/proxy/: bar (200; 9.317653ms) | |
Mar 6 02:43:27.582: INFO: (16) /api/v1/namespaces/proxy-7131/services/https:proxy-service-jw2qq:tlsportname2/proxy/: tls qux (200; 9.447303ms) | |
Mar 6 02:43:27.584: INFO: (17) /api/v1/namespaces/proxy-7131/pods/https:proxy-service-jw2qq-p6vtp:462/proxy/: tls qux (200; 2.239256ms) | |
Mar 6 02:43:27.584: INFO: (17) /api/v1/namespaces/proxy-7131/pods/proxy-service-jw2qq-p6vtp/proxy/: <a href="/api/v1/namespaces/proxy-7131/pods/proxy-service-jw2qq-p6vtp/proxy/rewriteme">test</a> (200; 2.184016ms) | |
Mar 6 02:43:27.586: INFO: (17) /api/v1/namespaces/proxy-7131/pods/proxy-service-jw2qq-p6vtp:162/proxy/: bar (200; 2.926124ms) | |
Mar 6 02:43:27.586: INFO: (17) /api/v1/namespaces/proxy-7131/pods/http:proxy-service-jw2qq-p6vtp:160/proxy/: foo (200; 3.219131ms) | |
Mar 6 02:43:27.587: INFO: (17) /api/v1/namespaces/proxy-7131/pods/http:proxy-service-jw2qq-p6vtp:162/proxy/: bar (200; 3.80036ms) | |
Mar 6 02:43:27.589: INFO: (17) /api/v1/namespaces/proxy-7131/pods/http:proxy-service-jw2qq-p6vtp:1080/proxy/: <a href="/api/v1/namespaces/proxy-7131/pods/http:proxy-service-jw2qq-p6vtp:1080/proxy/rewriteme">... (200; 5.77971ms) | |
Mar 6 02:43:27.589: INFO: (17) /api/v1/namespaces/proxy-7131/pods/proxy-service-jw2qq-p6vtp:1080/proxy/: <a href="/api/v1/namespaces/proxy-7131/pods/proxy-service-jw2qq-p6vtp:1080/proxy/rewriteme">test<... (200; 6.471558ms) | |
Mar 6 02:43:27.589: INFO: (17) /api/v1/namespaces/proxy-7131/services/proxy-service-jw2qq:portname1/proxy/: foo (200; 6.703507ms) | |
Mar 6 02:43:27.589: INFO: (17) /api/v1/namespaces/proxy-7131/services/http:proxy-service-jw2qq:portname2/proxy/: bar (200; 6.428621ms) | |
Mar 6 02:43:27.590: INFO: (17) /api/v1/namespaces/proxy-7131/services/proxy-service-jw2qq:portname2/proxy/: bar (200; 6.69784ms) | |
Mar 6 02:43:27.590: INFO: (17) /api/v1/namespaces/proxy-7131/services/https:proxy-service-jw2qq:tlsportname2/proxy/: tls qux (200; 8.124954ms) | |
Mar 6 02:43:27.591: INFO: (17) /api/v1/namespaces/proxy-7131/pods/https:proxy-service-jw2qq-p6vtp:443/proxy/: <a href="/api/v1/namespaces/proxy-7131/pods/https:proxy-service-jw2qq-p6vtp:443/proxy/tlsrewritem... (200; 8.185084ms) | |
Mar 6 02:43:27.591: INFO: (17) /api/v1/namespaces/proxy-7131/pods/https:proxy-service-jw2qq-p6vtp:460/proxy/: tls baz (200; 7.977892ms) | |
Mar 6 02:43:27.591: INFO: (17) /api/v1/namespaces/proxy-7131/pods/proxy-service-jw2qq-p6vtp:160/proxy/: foo (200; 8.496337ms) | |
Mar 6 02:43:27.592: INFO: (17) /api/v1/namespaces/proxy-7131/services/http:proxy-service-jw2qq:portname1/proxy/: foo (200; 9.093624ms) | |
Mar 6 02:43:27.592: INFO: (17) /api/v1/namespaces/proxy-7131/services/https:proxy-service-jw2qq:tlsportname1/proxy/: tls baz (200; 9.116111ms) | |
Mar 6 02:43:27.596: INFO: (18) /api/v1/namespaces/proxy-7131/services/https:proxy-service-jw2qq:tlsportname2/proxy/: tls qux (200; 3.637097ms) | |
Mar 6 02:43:27.596: INFO: (18) /api/v1/namespaces/proxy-7131/pods/https:proxy-service-jw2qq-p6vtp:462/proxy/: tls qux (200; 3.743761ms) | |
Mar 6 02:43:27.596: INFO: (18) /api/v1/namespaces/proxy-7131/pods/proxy-service-jw2qq-p6vtp:1080/proxy/: <a href="/api/v1/namespaces/proxy-7131/pods/proxy-service-jw2qq-p6vtp:1080/proxy/rewriteme">test<... (200; 3.509488ms) | |
Mar 6 02:43:27.596: INFO: (18) /api/v1/namespaces/proxy-7131/pods/https:proxy-service-jw2qq-p6vtp:443/proxy/: <a href="/api/v1/namespaces/proxy-7131/pods/https:proxy-service-jw2qq-p6vtp:443/proxy/tlsrewritem... (200; 3.638907ms) | |
Mar 6 02:43:27.596: INFO: (18) /api/v1/namespaces/proxy-7131/pods/http:proxy-service-jw2qq-p6vtp:1080/proxy/: <a href="/api/v1/namespaces/proxy-7131/pods/http:proxy-service-jw2qq-p6vtp:1080/proxy/rewriteme">... (200; 3.656624ms) | |
Mar 6 02:43:27.598: INFO: (18) /api/v1/namespaces/proxy-7131/pods/proxy-service-jw2qq-p6vtp:162/proxy/: bar (200; 4.978599ms) | |
Mar 6 02:43:27.598: INFO: (18) /api/v1/namespaces/proxy-7131/services/proxy-service-jw2qq:portname1/proxy/: foo (200; 5.833629ms) | |
Mar 6 02:43:27.599: INFO: (18) /api/v1/namespaces/proxy-7131/pods/https:proxy-service-jw2qq-p6vtp:460/proxy/: tls baz (200; 5.896292ms) | |
Mar 6 02:43:27.600: INFO: (18) /api/v1/namespaces/proxy-7131/services/proxy-service-jw2qq:portname2/proxy/: bar (200; 6.981053ms) | |
Mar 6 02:43:27.600: INFO: (18) /api/v1/namespaces/proxy-7131/pods/http:proxy-service-jw2qq-p6vtp:162/proxy/: bar (200; 7.067662ms) | |
Mar 6 02:43:27.600: INFO: (18) /api/v1/namespaces/proxy-7131/services/https:proxy-service-jw2qq:tlsportname1/proxy/: tls baz (200; 7.282525ms) | |
Mar 6 02:43:27.600: INFO: (18) /api/v1/namespaces/proxy-7131/pods/proxy-service-jw2qq-p6vtp/proxy/: <a href="/api/v1/namespaces/proxy-7131/pods/proxy-service-jw2qq-p6vtp/proxy/rewriteme">test</a> (200; 7.616993ms) | |
Mar 6 02:43:27.601: INFO: (18) /api/v1/namespaces/proxy-7131/pods/proxy-service-jw2qq-p6vtp:160/proxy/: foo (200; 8.596923ms) | |
Mar 6 02:43:27.601: INFO: (18) /api/v1/namespaces/proxy-7131/services/http:proxy-service-jw2qq:portname2/proxy/: bar (200; 8.401555ms) | |
Mar 6 02:43:27.601: INFO: (18) /api/v1/namespaces/proxy-7131/pods/http:proxy-service-jw2qq-p6vtp:160/proxy/: foo (200; 8.786507ms) | |
Mar 6 02:43:27.602: INFO: (18) /api/v1/namespaces/proxy-7131/services/http:proxy-service-jw2qq:portname1/proxy/: foo (200; 8.980343ms) | |
Mar 6 02:43:27.611: INFO: (19) /api/v1/namespaces/proxy-7131/pods/https:proxy-service-jw2qq-p6vtp:443/proxy/: <a href="/api/v1/namespaces/proxy-7131/pods/https:proxy-service-jw2qq-p6vtp:443/proxy/tlsrewritem... (200; 8.797235ms) | |
Mar 6 02:43:27.611: INFO: (19) /api/v1/namespaces/proxy-7131/pods/proxy-service-jw2qq-p6vtp:162/proxy/: bar (200; 8.812818ms) | |
Mar 6 02:43:27.611: INFO: (19) /api/v1/namespaces/proxy-7131/pods/http:proxy-service-jw2qq-p6vtp:162/proxy/: bar (200; 9.113857ms) | |
Mar 6 02:43:27.611: INFO: (19) /api/v1/namespaces/proxy-7131/pods/https:proxy-service-jw2qq-p6vtp:460/proxy/: tls baz (200; 9.268957ms) | |
Mar 6 02:43:27.611: INFO: (19) /api/v1/namespaces/proxy-7131/pods/http:proxy-service-jw2qq-p6vtp:160/proxy/: foo (200; 9.564676ms) | |
Mar 6 02:43:27.611: INFO: (19) /api/v1/namespaces/proxy-7131/pods/proxy-service-jw2qq-p6vtp/proxy/: <a href="/api/v1/namespaces/proxy-7131/pods/proxy-service-jw2qq-p6vtp/proxy/rewriteme">test</a> (200; 9.21287ms) | |
Mar 6 02:43:27.612: INFO: (19) /api/v1/namespaces/proxy-7131/pods/http:proxy-service-jw2qq-p6vtp:1080/proxy/: <a href="/api/v1/namespaces/proxy-7131/pods/http:proxy-service-jw2qq-p6vtp:1080/proxy/rewriteme">... (200; 9.314835ms) | |
Mar 6 02:43:27.613: INFO: (19) /api/v1/namespaces/proxy-7131/pods/https:proxy-service-jw2qq-p6vtp:462/proxy/: tls qux (200; 10.20391ms) | |
Mar 6 02:43:27.613: INFO: (19) /api/v1/namespaces/proxy-7131/pods/proxy-service-jw2qq-p6vtp:160/proxy/: foo (200; 10.335882ms) | |
Mar 6 02:43:27.613: INFO: (19) /api/v1/namespaces/proxy-7131/pods/proxy-service-jw2qq-p6vtp:1080/proxy/: <a href="/api/v1/namespaces/proxy-7131/pods/proxy-service-jw2qq-p6vtp:1080/proxy/rewriteme">test<... (200; 10.279634ms) | |
Mar 6 02:43:27.613: INFO: (19) /api/v1/namespaces/proxy-7131/services/https:proxy-service-jw2qq:tlsportname2/proxy/: tls qux (200; 11.108106ms) | |
Mar 6 02:43:27.613: INFO: (19) /api/v1/namespaces/proxy-7131/services/proxy-service-jw2qq:portname1/proxy/: foo (200; 11.098982ms) | |
Mar 6 02:43:27.614: INFO: (19) /api/v1/namespaces/proxy-7131/services/http:proxy-service-jw2qq:portname1/proxy/: foo (200; 11.630253ms) | |
Mar 6 02:43:27.615: INFO: (19) /api/v1/namespaces/proxy-7131/services/proxy-service-jw2qq:portname2/proxy/: bar (200; 13.247151ms) | |
Mar 6 02:43:27.621: INFO: (19) /api/v1/namespaces/proxy-7131/services/http:proxy-service-jw2qq:portname2/proxy/: bar (200; 19.348249ms) | |
Mar 6 02:43:27.622: INFO: (19) /api/v1/namespaces/proxy-7131/services/https:proxy-service-jw2qq:tlsportname1/proxy/: tls baz (200; 20.210076ms) | |
STEP: deleting ReplicationController proxy-service-jw2qq in namespace proxy-7131, will wait for the garbage collector to delete the pods | |
Mar 6 02:43:27.679: INFO: Deleting ReplicationController proxy-service-jw2qq took: 4.632894ms | |
Mar 6 02:43:28.279: INFO: Terminating ReplicationController proxy-service-jw2qq pods took: 600.155129ms | |
[AfterEach] version v1 | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 | |
Mar 6 02:43:35.179: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
STEP: Destroying namespace "proxy-7131" for this suite. | |
• [SLOW TEST:18.022 seconds] | |
[sig-network] Proxy | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 | |
version v1 | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:57 | |
should proxy through a service and a pod [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
------------------------------ | |
{"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance]","total":278,"completed":18,"skipped":282,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]"]} | |
SSSSSSSSSSSSSSSSSSSSSSSSSS | |
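Each numbered group (0) through (19) above is one sweep over the same sixteen proxy endpoints, exercising URLs of the form /api/v1/namespaces/<ns>/{pods|services}/[<scheme>:]<name>[:<port>]/proxy/<path>; the trailing figure on each line is the round-trip latency through the apiserver. A minimal client-go sketch of one such request, assuming a recent client-go (the kubeconfig path, namespace, pod name, and port are taken from this run; everything else is illustrative):

package main

import (
	"context"
	"fmt"

	utilnet "k8s.io/apimachinery/pkg/util/net"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Same kind of kubeconfig the suite loads.
	config, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig-780690759")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	// GET /api/v1/namespaces/proxy-7131/pods/http:proxy-service-jw2qq-p6vtp:1080/proxy/
	// (swap Resource("pods") for "services" and the name for something like
	// "https:proxy-service-jw2qq:tlsportname1" to hit the service variants above).
	body, err := client.CoreV1().RESTClient().Get().
		Namespace("proxy-7131").
		Resource("pods").
		SubResource("proxy").
		Name(utilnet.JoinSchemeNamePort("http", "proxy-service-jw2qq-p6vtp", "1080")).
		DoRaw(context.TODO())
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s\n", body)
}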
------------------------------ | |
[sig-apps] ReplicationController | |
should release no longer matching pods [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
[BeforeEach] [sig-apps] ReplicationController | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 | |
STEP: Creating a kubernetes client | |
Mar 6 02:43:35.186: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759 | |
STEP: Building a namespace api object, basename replication-controller | |
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in replication-controller-3457 | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[It] should release no longer matching pods [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
STEP: Given a ReplicationController is created | |
STEP: When the matched label of one of its pods change | |
Mar 6 02:43:35.325: INFO: Pod name pod-release: Found 0 pods out of 1 | |
Mar 6 02:43:40.335: INFO: Pod name pod-release: Found 1 pods out of 1 | |
STEP: Then the pod is released | |
[AfterEach] [sig-apps] ReplicationController | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 | |
Mar 6 02:43:41.386: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
STEP: Destroying namespace "replication-controller-3457" for this suite. | |
• [SLOW TEST:6.212 seconds] | |
[sig-apps] ReplicationController | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 | |
should release no longer matching pods [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
------------------------------ | |
{"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":278,"completed":19,"skipped":308,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]"]} | |
SSSSSSSSSSSSSSSSSSSSSSS | |
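The release step above hinges on the ReplicationController manager orphaning a pod — removing its ownerReference and creating a replacement — the moment the pod's labels stop matching the controller's selector. A minimal sketch of the label flip that triggers this, assuming a recent client-go (only the namespace comes from this run; the generated pod name is hypothetical):

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig-780690759")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	// Overwrite the label the RC selects on; the controller then clears the
	// pod's ownerReference ("releases" it) and spins up a replacement.
	// "pod-release-abcde" stands in for the generated pod name.
	patch := []byte(`{"metadata":{"labels":{"name":"no-longer-matching"}}}`)
	_, err = client.CoreV1().Pods("replication-controller-3457").
		Patch(context.TODO(), "pod-release-abcde", types.StrategicMergePatchType, patch, metav1.PatchOptions{})
	if err != nil {
		panic(err)
	}
}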
------------------------------ | |
[sig-api-machinery] ResourceQuota | |
should be able to update and delete ResourceQuota. [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
[BeforeEach] [sig-api-machinery] ResourceQuota | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 | |
STEP: Creating a kubernetes client | |
Mar 6 02:43:41.398: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759 | |
STEP: Building a namespace api object, basename resourcequota | |
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in resourcequota-430 | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[It] should be able to update and delete ResourceQuota. [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
STEP: Creating a ResourceQuota | |
STEP: Getting a ResourceQuota | |
STEP: Updating a ResourceQuota | |
STEP: Verifying a ResourceQuota was modified | |
STEP: Deleting a ResourceQuota | |
STEP: Verifying the deleted ResourceQuota | |
[AfterEach] [sig-api-machinery] ResourceQuota | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 | |
Mar 6 02:43:41.555: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
STEP: Destroying namespace "resourcequota-430" for this suite. | |
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":278,"completed":20,"skipped":331,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]"]} | |
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS | |
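The sequence above (create, get, update, verify, delete) maps one-to-one onto the ResourceQuota client. A minimal sketch against the same namespace, assuming a recent client-go (the quota name and hard limits are assumptions, not from this run):

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig-780690759")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)
	quotas := client.CoreV1().ResourceQuotas("resourcequota-430")
	ctx := context.TODO()

	// Create a quota capping the namespace at 5 pods (values are illustrative).
	q, err := quotas.Create(ctx, &corev1.ResourceQuota{
		ObjectMeta: metav1.ObjectMeta{Name: "test-quota"},
		Spec: corev1.ResourceQuotaSpec{
			Hard: corev1.ResourceList{corev1.ResourcePods: resource.MustParse("5")},
		},
	}, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}

	// Update: raise the hard limit, then delete the quota again.
	q.Spec.Hard[corev1.ResourcePods] = resource.MustParse("10")
	if _, err := quotas.Update(ctx, q, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	if err := quotas.Delete(ctx, "test-quota", metav1.DeleteOptions{}); err != nil {
		panic(err)
	}
}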
------------------------------ | |
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class | |
should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
[BeforeEach] [k8s.io] [sig-node] Pods Extended | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 | |
STEP: Creating a kubernetes client | |
Mar 6 02:43:41.563: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759 | |
STEP: Building a namespace api object, basename pods | |
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pods-8136 | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[BeforeEach] [k8s.io] Pods Set QOS Class | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:178 | |
[It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
STEP: creating the pod | |
STEP: submitting the pod to kubernetes | |
STEP: verifying QOS class is set on the pod | |
[AfterEach] [k8s.io] [sig-node] Pods Extended | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 | |
Mar 6 02:43:41.709: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
STEP: Destroying namespace "pods-8136" for this suite. | |
•{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":278,"completed":21,"skipped":369,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]"]} | |
SS | |
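The QOS class being verified is derived by the apiserver from the pod spec: when every container's requests equal its limits for both cpu and memory, status.qosClass comes back Guaranteed. A minimal sketch, assuming a recent client-go (image, names, and quantities are illustrative):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig-780690759")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	// Identical requests and limits on every container => Guaranteed QoS.
	rl := corev1.ResourceList{
		corev1.ResourceCPU:    resource.MustParse("100m"),
		corev1.ResourceMemory: resource.MustParse("100Mi"),
	}
	pod, err := client.CoreV1().Pods("pods-8136").Create(context.TODO(), &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "qos-demo"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:      "pause",
				Image:     "k8s.gcr.io/pause:3.1",
				Resources: corev1.ResourceRequirements{Requests: rl, Limits: rl},
			}},
		},
	}, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println(pod.Status.QOSClass) // expected: Guaranteed
}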
------------------------------ | |
[sig-storage] Downward API volume | |
should update annotations on modification [NodeConformance] [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
[BeforeEach] [sig-storage] Downward API volume | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 | |
STEP: Creating a kubernetes client | |
Mar 6 02:43:41.722: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759 | |
STEP: Building a namespace api object, basename downward-api | |
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-9454 | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[BeforeEach] [sig-storage] Downward API volume | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 | |
[It] should update annotations on modification [NodeConformance] [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
STEP: Creating the pod | |
Mar 6 02:43:44.469: INFO: Successfully updated pod "annotationupdate8dccc44e-23b2-438e-818c-7e0ea5200f23" | |
[AfterEach] [sig-storage] Downward API volume | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 | |
Mar 6 02:43:48.501: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
STEP: Destroying namespace "downward-api-9454" for this suite. | |
• [SLOW TEST:6.790 seconds] | |
[sig-storage] Downward API volume | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 | |
should update annotations on modification [NodeConformance] [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
------------------------------ | |
{"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":278,"completed":22,"skipped":371,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]"]} | |
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS | |
------------------------------ | |
[sig-storage] Secrets | |
should be consumable from pods in volume with mappings [NodeConformance] [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
[BeforeEach] [sig-storage] Secrets | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 | |
STEP: Creating a kubernetes client | |
Mar 6 02:43:48.512: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759 | |
STEP: Building a namespace api object, basename secrets | |
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in secrets-742 | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
STEP: Creating secret with name secret-test-map-4924016d-63ed-44f9-9335-0167571a0a67 | |
STEP: Creating a pod to test consume secrets | |
Mar 6 02:43:48.663: INFO: Waiting up to 5m0s for pod "pod-secrets-e6da0ef9-6fed-40e2-bf61-61d7a43f9b87" in namespace "secrets-742" to be "success or failure" | |
Mar 6 02:43:48.665: INFO: Pod "pod-secrets-e6da0ef9-6fed-40e2-bf61-61d7a43f9b87": Phase="Pending", Reason="", readiness=false. Elapsed: 2.858025ms | |
Mar 6 02:43:50.669: INFO: Pod "pod-secrets-e6da0ef9-6fed-40e2-bf61-61d7a43f9b87": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006411129s | |
STEP: Saw pod success | |
Mar 6 02:43:50.669: INFO: Pod "pod-secrets-e6da0ef9-6fed-40e2-bf61-61d7a43f9b87" satisfied condition "success or failure" | |
Mar 6 02:43:50.672: INFO: Trying to get logs from node worker02 pod pod-secrets-e6da0ef9-6fed-40e2-bf61-61d7a43f9b87 container secret-volume-test: <nil> | |
STEP: delete the pod | |
Mar 6 02:43:50.686: INFO: Waiting for pod pod-secrets-e6da0ef9-6fed-40e2-bf61-61d7a43f9b87 to disappear | |
Mar 6 02:43:50.689: INFO: Pod pod-secrets-e6da0ef9-6fed-40e2-bf61-61d7a43f9b87 no longer exists | |
[AfterEach] [sig-storage] Secrets | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 | |
Mar 6 02:43:50.689: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
STEP: Destroying namespace "secrets-742" for this suite. | |
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":23,"skipped":419,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]"]} | |
SSSS | |
------------------------------ | |
[sig-cli] Kubectl client Kubectl version | |
should check is all data is printed [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
[BeforeEach] [sig-cli] Kubectl client | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 | |
STEP: Creating a kubernetes client | |
Mar 6 02:43:50.699: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759 | |
STEP: Building a namespace api object, basename kubectl | |
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-7010 | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[BeforeEach] [sig-cli] Kubectl client | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 | |
[It] should check is all data is printed [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
Mar 6 02:43:50.841: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 version' | |
Mar 6 02:43:50.894: INFO: stderr: "" | |
Mar 6 02:43:50.894: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.3\", GitCommit:\"06ad960bfd03b39c8310aaf92d1e7c12ce618213\", GitTreeState:\"clean\", BuildDate:\"2020-02-11T18:14:22Z\", GoVersion:\"go1.13.6\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.3\", GitCommit:\"06ad960bfd03b39c8310aaf92d1e7c12ce618213\", GitTreeState:\"clean\", BuildDate:\"2020-02-11T18:07:13Z\", GoVersion:\"go1.13.6\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" | |
[AfterEach] [sig-cli] Kubectl client | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 | |
Mar 6 02:43:50.894: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
STEP: Destroying namespace "kubectl-7010" for this suite. | |
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":278,"completed":24,"skipped":423,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]"]} | |
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS | |
------------------------------ | |
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] | |
Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
[BeforeEach] [sig-apps] StatefulSet | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 | |
STEP: Creating a kubernetes client | |
Mar 6 02:43:50.904: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759 | |
STEP: Building a namespace api object, basename statefulset | |
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in statefulset-6960 | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[BeforeEach] [sig-apps] StatefulSet | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 | |
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 | |
STEP: Creating service test in namespace statefulset-6960 | |
[It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
STEP: Creating stateful set ss in namespace statefulset-6960 | |
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-6960 | |
Mar 6 02:43:51.049: INFO: Found 0 stateful pods, waiting for 1 | |
Mar 6 02:44:01.053: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true | |
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod | |
Mar 6 02:44:01.055: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 exec --namespace=statefulset-6960 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' | |
Mar 6 02:44:01.237: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" | |
Mar 6 02:44:01.237: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" | |
Mar 6 02:44:01.237: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' | |
Mar 6 02:44:01.239: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true | |
Mar 6 02:44:11.242: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false | |
Mar 6 02:44:11.242: INFO: Waiting for statefulset status.replicas updated to 0 | |
Mar 6 02:44:11.252: INFO: POD NODE PHASE GRACE CONDITIONS | |
Mar 6 02:44:11.252: INFO: ss-0 worker02 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-06 02:43:51 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-06 02:44:01 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-06 02:44:01 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-06 02:43:51 +0000 UTC }] | |
Mar 6 02:44:11.252: INFO: | |
Mar 6 02:44:11.252: INFO: StatefulSet ss has not reached scale 3, at 1 | |
Mar 6 02:44:12.255: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.99791637s | |
Mar 6 02:44:13.257: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.99504123s | |
Mar 6 02:44:14.260: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.992370488s | |
Mar 6 02:44:15.263: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.989731536s | |
Mar 6 02:44:16.266: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.986861957s | |
Mar 6 02:44:17.269: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.984250986s | |
Mar 6 02:44:18.271: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.981284768s | |
Mar 6 02:44:19.274: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.978504068s | |
Mar 6 02:44:20.277: INFO: Verifying statefulset ss doesn't scale past 3 for another 975.959906ms | |
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-6960 | |
Mar 6 02:44:21.280: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 exec --namespace=statefulset-6960 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' | |
Mar 6 02:44:21.455: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" | |
Mar 6 02:44:21.455: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" | |
Mar 6 02:44:21.455: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' | |
Mar 6 02:44:21.455: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 exec --namespace=statefulset-6960 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' | |
Mar 6 02:44:21.576: INFO: rc: 1 | |
Mar 6 02:44:21.576: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 exec --namespace=statefulset-6960 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: | |
Command stdout: | |
stderr: | |
error: unable to upgrade connection: container not found ("webserver") | |
error: | |
exit status 1 | |
Mar 6 02:44:31.576: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 exec --namespace=statefulset-6960 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' | |
Mar 6 02:44:31.753: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n" | |
Mar 6 02:44:31.753: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" | |
Mar 6 02:44:31.753: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' | |
Mar 6 02:44:31.753: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 exec --namespace=statefulset-6960 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' | |
Mar 6 02:44:31.922: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n" | |
Mar 6 02:44:31.922: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" | |
Mar 6 02:44:31.922: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' | |
Mar 6 02:44:31.926: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true | |
Mar 6 02:44:31.926: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true | |
Mar 6 02:44:31.926: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true | |
STEP: Scale down will not halt with unhealthy stateful pod | |
Mar 6 02:44:31.928: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 exec --namespace=statefulset-6960 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' | |
Mar 6 02:44:32.141: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" | |
Mar 6 02:44:32.141: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" | |
Mar 6 02:44:32.141: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' | |
Mar 6 02:44:32.141: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 exec --namespace=statefulset-6960 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' | |
Mar 6 02:44:32.355: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" | |
Mar 6 02:44:32.355: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" | |
Mar 6 02:44:32.355: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' | |
Mar 6 02:44:32.355: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 exec --namespace=statefulset-6960 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' | |
Mar 6 02:44:32.529: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" | |
Mar 6 02:44:32.529: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" | |
Mar 6 02:44:32.529: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' | |
Mar 6 02:44:32.529: INFO: Waiting for statefulset status.replicas updated to 0 | |
Mar 6 02:44:32.532: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 | |
Mar 6 02:44:42.537: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false | |
Mar 6 02:44:42.537: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false | |
Mar 6 02:44:42.537: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false | |
Mar 6 02:44:42.545: INFO: POD NODE PHASE GRACE CONDITIONS | |
Mar 6 02:44:42.545: INFO: ss-0 worker02 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-06 02:43:51 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-06 02:44:32 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-06 02:44:32 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-06 02:43:51 +0000 UTC }] | |
Mar 6 02:44:42.545: INFO: ss-1 worker01 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-06 02:44:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-06 02:44:32 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-06 02:44:32 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-06 02:44:11 +0000 UTC }] | |
Mar 6 02:44:42.545: INFO: ss-2 worker02 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-06 02:44:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-06 02:44:32 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-06 02:44:32 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-06 02:44:11 +0000 UTC }] | |
Mar 6 02:44:42.545: INFO: | |
Mar 6 02:44:42.545: INFO: StatefulSet ss has not reached scale 0, at 3 | |
Mar 6 02:44:43.548: INFO: POD NODE PHASE GRACE CONDITIONS | |
Mar 6 02:44:43.548: INFO: ss-0 worker02 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-06 02:43:51 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-06 02:44:32 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-06 02:44:32 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-06 02:43:51 +0000 UTC }] | |
Mar 6 02:44:43.548: INFO: ss-1 worker01 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-06 02:44:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-06 02:44:32 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-06 02:44:32 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-06 02:44:11 +0000 UTC }] | |
Mar 6 02:44:43.548: INFO: ss-2 worker02 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-06 02:44:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-06 02:44:32 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-06 02:44:32 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-06 02:44:11 +0000 UTC }] | |
Mar 6 02:44:43.548: INFO: | |
Mar 6 02:44:43.548: INFO: StatefulSet ss has not reached scale 0, at 3 | |
Mar 6 02:44:44.550: INFO: Verifying statefulset ss doesn't scale past 0 for another 7.994853321s | |
Mar 6 02:44:45.552: INFO: Verifying statefulset ss doesn't scale past 0 for another 6.992371171s | |
Mar 6 02:44:46.555: INFO: Verifying statefulset ss doesn't scale past 0 for another 5.990157101s | |
Mar 6 02:44:47.557: INFO: Verifying statefulset ss doesn't scale past 0 for another 4.987594658s | |
Mar 6 02:44:48.560: INFO: Verifying statefulset ss doesn't scale past 0 for another 3.985152476s | |
Mar 6 02:44:49.562: INFO: Verifying statefulset ss doesn't scale past 0 for another 2.98281703s | |
Mar 6 02:44:50.565: INFO: Verifying statefulset ss doesn't scale past 0 for another 1.980113327s | |
Mar 6 02:44:51.567: INFO: Verifying statefulset ss doesn't scale past 0 for another 977.904636ms | |
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace statefulset-6960 | |

Mar 6 02:44:52.570: INFO: Scaling statefulset ss to 0 | |
Mar 6 02:44:52.576: INFO: Waiting for statefulset status.replicas updated to 0 | |
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 | |
Mar 6 02:44:52.578: INFO: Deleting all statefulset in ns statefulset-6960 | |
Mar 6 02:44:52.580: INFO: Scaling statefulset ss to 0 | |
Mar 6 02:44:52.586: INFO: Waiting for statefulset status.replicas updated to 0 | |
Mar 6 02:44:52.587: INFO: Deleting statefulset ss | |
[AfterEach] [sig-apps] StatefulSet | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 | |
Mar 6 02:44:52.596: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
STEP: Destroying namespace "statefulset-6960" for this suite. | |
• [SLOW TEST:61.698 seconds] | |
[sig-apps] StatefulSet | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 | |
[k8s.io] Basic StatefulSet functionality [StatefulSetBasic] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 | |
Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
------------------------------ | |
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":278,"completed":25,"skipped":465,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]"]} | |
SSS | |
------------------------------ | |
[k8s.io] Probing container | |
should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
[BeforeEach] [k8s.io] Probing container | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 | |
STEP: Creating a kubernetes client | |
Mar 6 02:44:52.603: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759 | |
STEP: Building a namespace api object, basename container-probe | |
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-probe-8872 | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[BeforeEach] [k8s.io] Probing container | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 | |
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
STEP: Creating pod busybox-e04aa3bf-2c1a-451b-88e3-9b5a7ac027c5 in namespace container-probe-8872 | |
Mar 6 02:44:54.745: INFO: Started pod busybox-e04aa3bf-2c1a-451b-88e3-9b5a7ac027c5 in namespace container-probe-8872 | |
STEP: checking the pod's current state and verifying that restartCount is present | |
Mar 6 02:44:54.747: INFO: Initial restart count of pod busybox-e04aa3bf-2c1a-451b-88e3-9b5a7ac027c5 is 0 | |
STEP: deleting the pod | |
[AfterEach] [k8s.io] Probing container | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 | |
Mar 6 02:48:55.090: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
STEP: Destroying namespace "container-probe-8872" for this suite. | |
• [SLOW TEST:242.521 seconds] | |
[k8s.io] Probing container | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 | |
should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
------------------------------ | |
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":278,"completed":26,"skipped":468,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]"]} | |
SSSS | |
------------------------------ | |
[k8s.io] Kubelet when scheduling a read only busybox container | |
should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
[BeforeEach] [k8s.io] Kubelet | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 | |
STEP: Creating a kubernetes client | |
Mar 6 02:48:55.124: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759 | |
STEP: Building a namespace api object, basename kubelet-test | |
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubelet-test-9953 | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[BeforeEach] [k8s.io] Kubelet | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 | |
[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
[AfterEach] [k8s.io] Kubelet | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 | |
Mar 6 02:48:57.289: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
STEP: Destroying namespace "kubelet-test-9953" for this suite. | |
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":27,"skipped":472,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]"]} | |
SSSSSSSS | |
------------------------------ | |
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] | |
should perform rolling updates and roll backs of template modifications [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
[BeforeEach] [sig-apps] StatefulSet | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 | |
STEP: Creating a kubernetes client | |
Mar 6 02:48:57.297: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759 | |
STEP: Building a namespace api object, basename statefulset | |
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in statefulset-9937 | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[BeforeEach] [sig-apps] StatefulSet | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 | |
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 | |
STEP: Creating service test in namespace statefulset-9937 | |
[It] should perform rolling updates and roll backs of template modifications [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
STEP: Creating a new StatefulSet | |
Mar 6 02:48:57.523: INFO: Found 0 stateful pods, waiting for 3 | |
Mar 6 02:49:07.526: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true | |
Mar 6 02:49:07.526: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true | |
Mar 6 02:49:07.526: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true | |
Mar 6 02:49:07.535: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 exec --namespace=statefulset-9937 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' | |
Mar 6 02:49:07.724: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" | |
Mar 6 02:49:07.724: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" | |
Mar 6 02:49:07.724: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' | |
STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine | |
Mar 6 02:49:17.751: INFO: Updating stateful set ss2 | |
STEP: Creating a new revision | |
STEP: Updating Pods in reverse ordinal order | |
Mar 6 02:49:27.763: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 exec --namespace=statefulset-9937 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' | |
Mar 6 02:49:27.940: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" | |
Mar 6 02:49:27.940: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" | |
Mar 6 02:49:27.940: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' | |
Mar 6 02:49:37.953: INFO: Waiting for StatefulSet statefulset-9937/ss2 to complete update | |
Mar 6 02:49:37.953: INFO: Waiting for Pod statefulset-9937/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 | |
Mar 6 02:49:37.953: INFO: Waiting for Pod statefulset-9937/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 | |
Mar 6 02:49:47.959: INFO: Waiting for StatefulSet statefulset-9937/ss2 to complete update | |
Mar 6 02:49:47.959: INFO: Waiting for Pod statefulset-9937/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 | |
Mar 6 02:49:47.959: INFO: Waiting for Pod statefulset-9937/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 | |
Mar 6 02:49:57.958: INFO: Waiting for StatefulSet statefulset-9937/ss2 to complete update | |
Mar 6 02:49:57.958: INFO: Waiting for Pod statefulset-9937/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 | |
Mar 6 02:50:07.958: INFO: Waiting for StatefulSet statefulset-9937/ss2 to complete update | |
Mar 6 02:50:07.958: INFO: Waiting for Pod statefulset-9937/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 | |
STEP: Rolling back to a previous revision | |
Mar 6 02:50:17.958: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 exec --namespace=statefulset-9937 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' | |
Mar 6 02:50:18.294: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" | |
Mar 6 02:50:18.294: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" | |
Mar 6 02:50:18.294: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' | |
Mar 6 02:50:28.322: INFO: Updating stateful set ss2 | |
STEP: Rolling back update in reverse ordinal order | |
Mar 6 02:50:38.335: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 exec --namespace=statefulset-9937 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' | |
Mar 6 02:50:38.518: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" | |
Mar 6 02:50:38.518: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" | |
Mar 6 02:50:38.518: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' | |
Mar 6 02:50:58.531: INFO: Waiting for StatefulSet statefulset-9937/ss2 to complete update | |
Mar 6 02:50:58.531: INFO: Waiting for Pod statefulset-9937/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 | |
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 | |
Mar 6 02:51:08.536: INFO: Deleting all statefulset in ns statefulset-9937 | |
Mar 6 02:51:08.538: INFO: Scaling statefulset ss2 to 0 | |
Mar 6 02:51:28.548: INFO: Waiting for statefulset status.replicas updated to 0 | |
Mar 6 02:51:28.551: INFO: Deleting statefulset ss2 | |
[AfterEach] [sig-apps] StatefulSet | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 | |
Mar 6 02:51:28.560: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
STEP: Destroying namespace "statefulset-9937" for this suite. | |
• [SLOW TEST:151.270 seconds] | |
[sig-apps] StatefulSet | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 | |
[k8s.io] Basic StatefulSet functionality [StatefulSetBasic] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 | |
should perform rolling updates and roll backs of template modifications [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
------------------------------ | |
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":278,"completed":28,"skipped":480,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]"]} | |
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS | |
------------------------------ | |
[sig-api-machinery] Watchers | |
should observe add, update, and delete watch notifications on configmaps [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
[BeforeEach] [sig-api-machinery] Watchers | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 | |
STEP: Creating a kubernetes client | |
Mar 6 02:51:28.567: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759 | |
STEP: Building a namespace api object, basename watch | |
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in watch-7757 | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[It] should observe add, update, and delete watch notifications on configmaps [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
STEP: creating a watch on configmaps with label A | |
STEP: creating a watch on configmaps with label B | |
STEP: creating a watch on configmaps with label A or B | |
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification | |
Mar 6 02:51:28.700: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-7757 /api/v1/namespaces/watch-7757/configmaps/e2e-watch-test-configmap-a 4f40eb2b-59eb-4b6f-9803-b4845fb27fe9 6448 0 2020-03-06 02:51:28 +0000 UTC <nil> <nil> map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} | |
Mar 6 02:51:28.700: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-7757 /api/v1/namespaces/watch-7757/configmaps/e2e-watch-test-configmap-a 4f40eb2b-59eb-4b6f-9803-b4845fb27fe9 6448 0 2020-03-06 02:51:28 +0000 UTC <nil> <nil> map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} | |
STEP: modifying configmap A and ensuring the correct watchers observe the notification | |
Mar 6 02:51:38.709: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-7757 /api/v1/namespaces/watch-7757/configmaps/e2e-watch-test-configmap-a 4f40eb2b-59eb-4b6f-9803-b4845fb27fe9 6543 0 2020-03-06 02:51:28 +0000 UTC <nil> <nil> map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} | |
Mar 6 02:51:38.709: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-7757 /api/v1/namespaces/watch-7757/configmaps/e2e-watch-test-configmap-a 4f40eb2b-59eb-4b6f-9803-b4845fb27fe9 6543 0 2020-03-06 02:51:28 +0000 UTC <nil> <nil> map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} | |
STEP: modifying configmap A again and ensuring the correct watchers observe the notification | |
Mar 6 02:51:48.718: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-7757 /api/v1/namespaces/watch-7757/configmaps/e2e-watch-test-configmap-a 4f40eb2b-59eb-4b6f-9803-b4845fb27fe9 6572 0 2020-03-06 02:51:28 +0000 UTC <nil> <nil> map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} | |
Mar 6 02:51:48.718: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-7757 /api/v1/namespaces/watch-7757/configmaps/e2e-watch-test-configmap-a 4f40eb2b-59eb-4b6f-9803-b4845fb27fe9 6572 0 2020-03-06 02:51:28 +0000 UTC <nil> <nil> map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} | |
STEP: deleting configmap A and ensuring the correct watchers observe the notification | |
Mar 6 02:51:58.724: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-7757 /api/v1/namespaces/watch-7757/configmaps/e2e-watch-test-configmap-a 4f40eb2b-59eb-4b6f-9803-b4845fb27fe9 6601 0 2020-03-06 02:51:28 +0000 UTC <nil> <nil> map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} | |
Mar 6 02:51:58.724: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-7757 /api/v1/namespaces/watch-7757/configmaps/e2e-watch-test-configmap-a 4f40eb2b-59eb-4b6f-9803-b4845fb27fe9 6601 0 2020-03-06 02:51:28 +0000 UTC <nil> <nil> map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} | |
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification | |
Mar 6 02:52:08.730: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-7757 /api/v1/namespaces/watch-7757/configmaps/e2e-watch-test-configmap-b 3249b713-6008-4219-8f77-6469c3a14ca6 6630 0 2020-03-06 02:52:08 +0000 UTC <nil> <nil> map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} | |
Mar 6 02:52:08.730: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-7757 /api/v1/namespaces/watch-7757/configmaps/e2e-watch-test-configmap-b 3249b713-6008-4219-8f77-6469c3a14ca6 6630 0 2020-03-06 02:52:08 +0000 UTC <nil> <nil> map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} | |
STEP: deleting configmap B and ensuring the correct watchers observe the notification | |
Mar 6 02:52:18.736: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-7757 /api/v1/namespaces/watch-7757/configmaps/e2e-watch-test-configmap-b 3249b713-6008-4219-8f77-6469c3a14ca6 6659 0 2020-03-06 02:52:08 +0000 UTC <nil> <nil> map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} | |
Mar 6 02:52:18.736: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-7757 /api/v1/namespaces/watch-7757/configmaps/e2e-watch-test-configmap-b 3249b713-6008-4219-8f77-6469c3a14ca6 6659 0 2020-03-06 02:52:08 +0000 UTC <nil> <nil> map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} | |
[AfterEach] [sig-api-machinery] Watchers | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 | |
Mar 6 02:52:28.736: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
STEP: Destroying namespace "watch-7757" for this suite. | |
• [SLOW TEST:60.177 seconds] | |
[sig-api-machinery] Watchers | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 | |
should observe add, update, and delete watch notifications on configmaps [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
------------------------------ | |
{"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":278,"completed":29,"skipped":516,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]"]} | |
SSS | |
------------------------------ | |
[sig-apps] ReplicationController | |
should serve a basic image on each replica with a public image [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
[BeforeEach] [sig-apps] ReplicationController | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 | |
STEP: Creating a kubernetes client | |
Mar 6 02:52:28.744: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759 | |
STEP: Building a namespace api object, basename replication-controller | |
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in replication-controller-5712 | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[It] should serve a basic image on each replica with a public image [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
STEP: Creating replication controller my-hostname-basic-29703e58-f167-4229-89c4-828675e6b62e | |
Mar 6 02:52:28.883: INFO: Pod name my-hostname-basic-29703e58-f167-4229-89c4-828675e6b62e: Found 0 pods out of 1 | |
Mar 6 02:52:33.885: INFO: Pod name my-hostname-basic-29703e58-f167-4229-89c4-828675e6b62e: Found 1 pods out of 1 | |
Mar 6 02:52:33.885: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-29703e58-f167-4229-89c4-828675e6b62e" are running | |
Mar 6 02:52:33.887: INFO: Pod "my-hostname-basic-29703e58-f167-4229-89c4-828675e6b62e-n7mjx" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-06 02:52:28 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-06 02:52:29 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-06 02:52:29 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-06 02:52:28 +0000 UTC Reason: Message:}]) | |
Mar 6 02:52:33.887: INFO: Trying to dial the pod | |
Mar 6 02:52:38.895: INFO: Controller my-hostname-basic-29703e58-f167-4229-89c4-828675e6b62e: Got expected result from replica 1 [my-hostname-basic-29703e58-f167-4229-89c4-828675e6b62e-n7mjx]: "my-hostname-basic-29703e58-f167-4229-89c4-828675e6b62e-n7mjx", 1 of 1 required successes so far | |
[AfterEach] [sig-apps] ReplicationController | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 | |
Mar 6 02:52:38.895: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
STEP: Destroying namespace "replication-controller-5712" for this suite. | |
• [SLOW TEST:10.158 seconds] | |
[sig-apps] ReplicationController | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 | |
should serve a basic image on each replica with a public image [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
------------------------------ | |
{"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":278,"completed":30,"skipped":519,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]"]} | |
SSSSSSSSS | |
------------------------------ | |
[sig-apps] ReplicationController | |
should surface a failure condition on a common issue like exceeded quota [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
[BeforeEach] [sig-apps] ReplicationController | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 | |
STEP: Creating a kubernetes client | |
Mar 6 02:52:38.902: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759 | |
STEP: Building a namespace api object, basename replication-controller | |
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in replication-controller-2404 | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[It] should surface a failure condition on a common issue like exceeded quota [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
Mar 6 02:52:39.039: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace | |
STEP: Creating rc "condition-test" that asks for more than the allowed pod quota | |
STEP: Checking rc "condition-test" has the desired failure condition set | |
STEP: Scaling down rc "condition-test" to satisfy pod quota | |
Mar 6 02:52:41.060: INFO: Updating replication controller "condition-test" | |
STEP: Checking rc "condition-test" has no failure condition set | |
[AfterEach] [sig-apps] ReplicationController | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 | |
Mar 6 02:52:42.064: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
STEP: Destroying namespace "replication-controller-2404" for this suite. | |
•{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":278,"completed":31,"skipped":528,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]"]} | |
SSSSSSSSSSSSSSSSSSSSSSSSSSSS | |
------------------------------ | |
[sig-storage] Projected configMap | |
should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
[BeforeEach] [sig-storage] Projected configMap | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 | |
STEP: Creating a kubernetes client | |
Mar 6 02:52:42.072: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759 | |
STEP: Building a namespace api object, basename projected | |
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-9984 | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
STEP: Creating configMap with name projected-configmap-test-volume-3202b067-4a22-4e48-b3d3-4c930b2a1c8a | |
STEP: Creating a pod to test consume configMaps | |
Mar 6 02:52:42.212: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-a1a3e3bd-8d08-41d1-839a-1cfd8ca9a5b4" in namespace "projected-9984" to be "success or failure" | |
Mar 6 02:52:42.219: INFO: Pod "pod-projected-configmaps-a1a3e3bd-8d08-41d1-839a-1cfd8ca9a5b4": Phase="Pending", Reason="", readiness=false. Elapsed: 7.202929ms | |
Mar 6 02:52:44.222: INFO: Pod "pod-projected-configmaps-a1a3e3bd-8d08-41d1-839a-1cfd8ca9a5b4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009666414s | |
STEP: Saw pod success | |
Mar 6 02:52:44.222: INFO: Pod "pod-projected-configmaps-a1a3e3bd-8d08-41d1-839a-1cfd8ca9a5b4" satisfied condition "success or failure" | |
Mar 6 02:52:44.223: INFO: Trying to get logs from node worker02 pod pod-projected-configmaps-a1a3e3bd-8d08-41d1-839a-1cfd8ca9a5b4 container projected-configmap-volume-test: <nil> | |
STEP: delete the pod | |
Mar 6 02:52:44.260: INFO: Waiting for pod pod-projected-configmaps-a1a3e3bd-8d08-41d1-839a-1cfd8ca9a5b4 to disappear | |
Mar 6 02:52:44.262: INFO: Pod pod-projected-configmaps-a1a3e3bd-8d08-41d1-839a-1cfd8ca9a5b4 no longer exists | |
[AfterEach] [sig-storage] Projected configMap | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 | |
Mar 6 02:52:44.262: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
STEP: Destroying namespace "projected-9984" for this suite. | |
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":32,"skipped":556,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]"]} | |
SSSSSSSSS | |
------------------------------ | |
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition | |
listing custom resource definition objects works [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 | |
STEP: Creating a kubernetes client | |
Mar 6 02:52:44.269: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759 | |
STEP: Building a namespace api object, basename custom-resource-definition | |
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in custom-resource-definition-7311 | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[It] listing custom resource definition objects works [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
Mar 6 02:52:44.402: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759 | |
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 | |
Mar 6 02:53:43.466: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
STEP: Destroying namespace "custom-resource-definition-7311" for this suite. | |
• [SLOW TEST:59.262 seconds] | |
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 | |
Simple CustomResourceDefinition | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:47 | |
listing custom resource definition objects works [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
------------------------------ | |
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":278,"completed":33,"skipped":565,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]"]} | |
SSSSS | |
------------------------------ | |
[sig-storage] HostPath | |
should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
[BeforeEach] [sig-storage] HostPath | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 | |
STEP: Creating a kubernetes client | |
Mar 6 02:53:43.531: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759 | |
STEP: Building a namespace api object, basename hostpath | |
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in hostpath-4388 | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[BeforeEach] [sig-storage] HostPath | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 | |
[It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
STEP: Creating a pod to test hostPath mode | |
Mar 6 02:53:43.713: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-4388" to be "success or failure" | |
Mar 6 02:53:43.726: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 13.00881ms | |
Mar 6 02:53:45.728: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.015481855s | |
STEP: Saw pod success | |
Mar 6 02:53:45.728: INFO: Pod "pod-host-path-test" satisfied condition "success or failure" | |
Mar 6 02:53:45.730: INFO: Trying to get logs from node worker02 pod pod-host-path-test container test-container-1: <nil> | |
STEP: delete the pod | |
Mar 6 02:53:45.743: INFO: Waiting for pod pod-host-path-test to disappear | |
Mar 6 02:53:45.745: INFO: Pod pod-host-path-test no longer exists | |
[AfterEach] [sig-storage] HostPath | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 | |
Mar 6 02:53:45.745: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
STEP: Destroying namespace "hostpath-4388" for this suite. | |
•{"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":34,"skipped":570,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]"]} | |
SSSSSSSSSSSSSSSSSSSSSS | |
------------------------------ | |
[sig-storage] Subpath Atomic writer volumes | |
should support subpaths with secret pod [LinuxOnly] [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
[BeforeEach] [sig-storage] Subpath | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 | |
STEP: Creating a kubernetes client | |
Mar 6 02:53:45.752: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759 | |
STEP: Building a namespace api object, basename subpath | |
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in subpath-1237 | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[BeforeEach] Atomic writer volumes | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 | |
STEP: Setting up data | |
[It] should support subpaths with secret pod [LinuxOnly] [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
STEP: Creating pod pod-subpath-test-secret-4qb6 | |
STEP: Creating a pod to test atomic-volume-subpath | |
Mar 6 02:53:45.893: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-4qb6" in namespace "subpath-1237" to be "success or failure" | |
Mar 6 02:53:45.895: INFO: Pod "pod-subpath-test-secret-4qb6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.155046ms | |
Mar 6 02:53:47.898: INFO: Pod "pod-subpath-test-secret-4qb6": Phase="Running", Reason="", readiness=true. Elapsed: 2.004474667s | |
Mar 6 02:53:49.900: INFO: Pod "pod-subpath-test-secret-4qb6": Phase="Running", Reason="", readiness=true. Elapsed: 4.007174477s | |
Mar 6 02:53:51.904: INFO: Pod "pod-subpath-test-secret-4qb6": Phase="Running", Reason="", readiness=true. Elapsed: 6.010781542s | |
Mar 6 02:53:53.906: INFO: Pod "pod-subpath-test-secret-4qb6": Phase="Running", Reason="", readiness=true. Elapsed: 8.013306742s | |
Mar 6 02:53:55.911: INFO: Pod "pod-subpath-test-secret-4qb6": Phase="Running", Reason="", readiness=true. Elapsed: 10.017886111s | |
Mar 6 02:53:57.915: INFO: Pod "pod-subpath-test-secret-4qb6": Phase="Running", Reason="", readiness=true. Elapsed: 12.02171346s | |
Mar 6 02:53:59.918: INFO: Pod "pod-subpath-test-secret-4qb6": Phase="Running", Reason="", readiness=true. Elapsed: 14.024400054s | |
Mar 6 02:54:01.920: INFO: Pod "pod-subpath-test-secret-4qb6": Phase="Running", Reason="", readiness=true. Elapsed: 16.02720213s | |
Mar 6 02:54:03.926: INFO: Pod "pod-subpath-test-secret-4qb6": Phase="Running", Reason="", readiness=true. Elapsed: 18.032624592s | |
Mar 6 02:54:05.928: INFO: Pod "pod-subpath-test-secret-4qb6": Phase="Running", Reason="", readiness=true. Elapsed: 20.034802806s | |
Mar 6 02:54:07.932: INFO: Pod "pod-subpath-test-secret-4qb6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.03883423s | |
STEP: Saw pod success | |
Mar 6 02:54:07.932: INFO: Pod "pod-subpath-test-secret-4qb6" satisfied condition "success or failure" | |
Mar 6 02:54:07.936: INFO: Trying to get logs from node worker02 pod pod-subpath-test-secret-4qb6 container test-container-subpath-secret-4qb6: <nil> | |
STEP: delete the pod | |
Mar 6 02:54:07.969: INFO: Waiting for pod pod-subpath-test-secret-4qb6 to disappear | |
Mar 6 02:54:07.974: INFO: Pod pod-subpath-test-secret-4qb6 no longer exists | |
STEP: Deleting pod pod-subpath-test-secret-4qb6 | |
Mar 6 02:54:07.974: INFO: Deleting pod "pod-subpath-test-secret-4qb6" in namespace "subpath-1237" | |
[AfterEach] [sig-storage] Subpath | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 | |
Mar 6 02:54:07.985: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
STEP: Destroying namespace "subpath-1237" for this suite. | |
• [SLOW TEST:22.245 seconds] | |
[sig-storage] Subpath | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 | |
Atomic writer volumes | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 | |
should support subpaths with secret pod [LinuxOnly] [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
------------------------------ | |
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":278,"completed":35,"skipped":592,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]"]} | |
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS | |
------------------------------ | |
[sig-network] Networking Granular Checks: Pods | |
should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
[BeforeEach] [sig-network] Networking | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 | |
STEP: Creating a kubernetes client | |
Mar 6 02:54:07.998: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759 | |
STEP: Building a namespace api object, basename pod-network-test | |
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pod-network-test-9728 | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
STEP: Performing setup for networking test in namespace pod-network-test-9728 | |
STEP: creating a selector | |
STEP: Creating the service pods in kubernetes | |
Mar 6 02:54:08.164: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable | |
STEP: Creating test pods | |
Mar 6 02:54:32.233: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.46:8080/dial?request=hostname&protocol=http&host=10.244.4.13&port=8080&tries=1'] Namespace:pod-network-test-9728 PodName:host-test-container-pod ContainerName:agnhost Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} | |
Mar 6 02:54:32.233: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759 | |
Mar 6 02:54:32.352: INFO: Waiting for responses: map[] | |
Mar 6 02:54:32.355: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.46:8080/dial?request=hostname&protocol=http&host=10.244.3.45&port=8080&tries=1'] Namespace:pod-network-test-9728 PodName:host-test-container-pod ContainerName:agnhost Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} | |
Mar 6 02:54:32.355: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759 | |
Mar 6 02:54:32.488: INFO: Waiting for responses: map[] | |
[AfterEach] [sig-network] Networking | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 | |
Mar 6 02:54:32.489: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
STEP: Destroying namespace "pod-network-test-9728" for this suite. | |
• [SLOW TEST:24.498 seconds] | |
[sig-network] Networking | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 | |
Granular Checks: Pods | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 | |
should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
------------------------------ | |
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":36,"skipped":657,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]"]} | |
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS | |
------------------------------ | |
[k8s.io] Docker Containers | |
should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
[BeforeEach] [k8s.io] Docker Containers | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 | |
STEP: Creating a kubernetes client | |
Mar 6 02:54:32.496: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759 | |
STEP: Building a namespace api object, basename containers | |
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in containers-9856 | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
STEP: Creating a pod to test override command | |
Mar 6 02:54:32.644: INFO: Waiting up to 5m0s for pod "client-containers-295365a7-6596-445f-ae86-38d6e67b6527" in namespace "containers-9856" to be "success or failure" | |
Mar 6 02:54:32.646: INFO: Pod "client-containers-295365a7-6596-445f-ae86-38d6e67b6527": Phase="Pending", Reason="", readiness=false. Elapsed: 1.845193ms | |
Mar 6 02:54:34.652: INFO: Pod "client-containers-295365a7-6596-445f-ae86-38d6e67b6527": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007862944s | |
STEP: Saw pod success | |
Mar 6 02:54:34.652: INFO: Pod "client-containers-295365a7-6596-445f-ae86-38d6e67b6527" satisfied condition "success or failure" | |
Mar 6 02:54:34.655: INFO: Trying to get logs from node worker02 pod client-containers-295365a7-6596-445f-ae86-38d6e67b6527 container test-container: <nil> | |
STEP: delete the pod | |
Mar 6 02:54:34.712: INFO: Waiting for pod client-containers-295365a7-6596-445f-ae86-38d6e67b6527 to disappear | |
Mar 6 02:54:34.714: INFO: Pod client-containers-295365a7-6596-445f-ae86-38d6e67b6527 no longer exists | |
[AfterEach] [k8s.io] Docker Containers | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 | |
Mar 6 02:54:34.714: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
STEP: Destroying namespace "containers-9856" for this suite. | |
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":278,"completed":37,"skipped":699,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]"]} | |
SSSSSSSSSSSSSS | |
------------------------------ | |
[sig-api-machinery] ResourceQuota | |
should verify ResourceQuota with best effort scope. [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
[BeforeEach] [sig-api-machinery] ResourceQuota | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 | |
STEP: Creating a kubernetes client | |
Mar 6 02:54:34.732: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759 | |
STEP: Building a namespace api object, basename resourcequota | |
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in resourcequota-4195 | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[It] should verify ResourceQuota with best effort scope. [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
STEP: Creating a ResourceQuota with best effort scope | |
STEP: Ensuring ResourceQuota status is calculated | |
STEP: Creating a ResourceQuota with not best effort scope | |
STEP: Ensuring ResourceQuota status is calculated | |
STEP: Creating a best-effort pod | |
STEP: Ensuring resource quota with best effort scope captures the pod usage | |
STEP: Ensuring resource quota with not best effort ignored the pod usage | |
STEP: Deleting the pod | |
STEP: Ensuring resource quota status released the pod usage | |
STEP: Creating a not best-effort pod | |
STEP: Ensuring resource quota with not best effort scope captures the pod usage | |
STEP: Ensuring resource quota with best effort scope ignored the pod usage | |
STEP: Deleting the pod | |
STEP: Ensuring resource quota status released the pod usage | |
[AfterEach] [sig-api-machinery] ResourceQuota | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 | |
Mar 6 02:54:50.930: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
STEP: Destroying namespace "resourcequota-4195" for this suite. | |
• [SLOW TEST:16.205 seconds] | |
[sig-api-machinery] ResourceQuota | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 | |
should verify ResourceQuota with best effort scope. [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
------------------------------ | |
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":278,"completed":38,"skipped":713,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]"]} | |
S | |
------------------------------ | |
[sig-storage] EmptyDir volumes | |
should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
[BeforeEach] [sig-storage] EmptyDir volumes | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 | |
STEP: Creating a kubernetes client | |
Mar 6 02:54:50.937: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759 | |
STEP: Building a namespace api object, basename emptydir | |
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-8272 | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
STEP: Creating a pod to test emptydir 0666 on tmpfs | |
Mar 6 02:54:51.071: INFO: Waiting up to 5m0s for pod "pod-dda41727-2cda-48e1-b867-13771a35004b" in namespace "emptydir-8272" to be "success or failure" | |
Mar 6 02:54:51.073: INFO: Pod "pod-dda41727-2cda-48e1-b867-13771a35004b": Phase="Pending", Reason="", readiness=false. Elapsed: 1.872287ms | |
Mar 6 02:54:53.075: INFO: Pod "pod-dda41727-2cda-48e1-b867-13771a35004b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004246433s | |
Mar 6 02:54:55.078: INFO: Pod "pod-dda41727-2cda-48e1-b867-13771a35004b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.006462419s | |
STEP: Saw pod success | |
Mar 6 02:54:55.078: INFO: Pod "pod-dda41727-2cda-48e1-b867-13771a35004b" satisfied condition "success or failure" | |
Mar 6 02:54:55.079: INFO: Trying to get logs from node worker02 pod pod-dda41727-2cda-48e1-b867-13771a35004b container test-container: <nil> | |
STEP: delete the pod | |
Mar 6 02:54:55.092: INFO: Waiting for pod pod-dda41727-2cda-48e1-b867-13771a35004b to disappear | |
Mar 6 02:54:55.095: INFO: Pod pod-dda41727-2cda-48e1-b867-13771a35004b no longer exists | |
[AfterEach] [sig-storage] EmptyDir volumes | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 | |
Mar 6 02:54:55.095: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
STEP: Destroying namespace "emptydir-8272" for this suite. | |
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":39,"skipped":714,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]"]} | |
SSSSSSSSS | |
------------------------------ | |
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] | |
should mutate custom resource with different stored version [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 | |
STEP: Creating a kubernetes client | |
Mar 6 02:54:55.105: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759 | |
STEP: Building a namespace api object, basename webhook | |
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in webhook-2842 | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 | |
STEP: Setting up server cert | |
STEP: Create role binding to let webhook read extension-apiserver-authentication | |
STEP: Deploying the webhook pod | |
STEP: Wait for the deployment to be ready | |
Mar 6 02:54:55.726: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set | |
STEP: Deploying the webhook service | |
STEP: Verifying the service has paired with the endpoint | |
Mar 6 02:54:58.752: INFO: Waiting for number of service:e2e-test-webhook endpoints to be 1 | |
[It] should mutate custom resource with different stored version [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
Mar 6 02:54:58.754: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759 | |
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-6607-crds.webhook.example.com via the AdmissionRegistration API | |
Mar 6 02:55:14.299: INFO: Waiting for webhook configuration to be ready... | |
Mar 6 02:55:24.413: INFO: Waiting for webhook configuration to be ready... | |
Mar 6 02:55:34.514: INFO: Waiting for webhook configuration to be ready... | |
Mar 6 02:55:44.620: INFO: Waiting for webhook configuration to be ready... | |
Mar 6 02:55:54.630: INFO: Waiting for webhook configuration to be ready... | |
Mar 6 02:55:54.630: FAIL: waiting for webhook configuration to be ready | |
Unexpected error: | |
<*errors.errorString | 0xc0000b3950>: { | |
s: "timed out waiting for the condition", | |
} | |
timed out waiting for the condition | |
occurred | |
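For context, the step that timed out is the registration of a MutatingWebhookConfiguration pointing at the sample-webhook service, after which the suite polls until the webhook actually intercepts requests; here it never did within the five attempts. A minimal sketch of such a registration, assuming client-go v0.17.x; the object name, webhook name, path, and CRD group below are placeholders, not the exact values the suite generates, and the suite additionally injects its self-signed CA into CABundle so the API server can verify the webhook's serving certificate:

    package main

    import (
        "fmt"

        admissionregv1 "k8s.io/api/admissionregistration/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig-780690759")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        path := "/mutating-custom-resource"
        sideEffects := admissionregv1.SideEffectClassNone
        failurePolicy := admissionregv1.Fail
        whc := &admissionregv1.MutatingWebhookConfiguration{
            ObjectMeta: metav1.ObjectMeta{Name: "demo-mutating-webhook"},
            Webhooks: []admissionregv1.MutatingWebhook{{
                Name: "mutate-crd.example.com",
                ClientConfig: admissionregv1.WebhookClientConfig{
                    Service: &admissionregv1.ServiceReference{
                        Namespace: "webhook-2842", // namespace from this run
                        Name:      "e2e-test-webhook",
                        Path:      &path,
                    },
                    CABundle: nil, // the suite fills in its generated CA here
                },
                Rules: []admissionregv1.RuleWithOperations{{
                    Operations: []admissionregv1.OperationType{admissionregv1.Create},
                    Rule: admissionregv1.Rule{
                        APIGroups:   []string{"webhook.example.com"},
                        APIVersions: []string{"*"},
                        Resources:   []string{"*"},
                    },
                }},
                SideEffects:             &sideEffects,
                FailurePolicy:           &failurePolicy,
                AdmissionReviewVersions: []string{"v1", "v1beta1"},
            }},
        }
        if _, err := cs.AdmissionregistrationV1().MutatingWebhookConfigurations().Create(whc); err != nil {
            panic(err)
        }
        fmt.Println("registered demo-mutating-webhook")
    }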
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 | |
STEP: Collecting events from namespace "webhook-2842". | |
STEP: Found 6 events. | |
Mar 6 02:55:55.149: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for sample-webhook-deployment-5f65f8c764-hh976: {default-scheduler } Scheduled: Successfully assigned webhook-2842/sample-webhook-deployment-5f65f8c764-hh976 to worker02 | |
Mar 6 02:55:55.149: INFO: At 2020-03-06 02:54:55 +0000 UTC - event for sample-webhook-deployment: {deployment-controller } ScalingReplicaSet: Scaled up replica set sample-webhook-deployment-5f65f8c764 to 1 | |
Mar 6 02:55:55.149: INFO: At 2020-03-06 02:54:55 +0000 UTC - event for sample-webhook-deployment-5f65f8c764: {replicaset-controller } SuccessfulCreate: Created pod: sample-webhook-deployment-5f65f8c764-hh976 | |
Mar 6 02:55:55.149: INFO: At 2020-03-06 02:54:56 +0000 UTC - event for sample-webhook-deployment-5f65f8c764-hh976: {kubelet worker02} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine | |
Mar 6 02:55:55.149: INFO: At 2020-03-06 02:54:56 +0000 UTC - event for sample-webhook-deployment-5f65f8c764-hh976: {kubelet worker02} Created: Created container sample-webhook | |
Mar 6 02:55:55.149: INFO: At 2020-03-06 02:54:56 +0000 UTC - event for sample-webhook-deployment-5f65f8c764-hh976: {kubelet worker02} Started: Started container sample-webhook | |
Mar 6 02:55:55.151: INFO: POD NODE PHASE GRACE CONDITIONS | |
Mar 6 02:55:55.151: INFO: sample-webhook-deployment-5f65f8c764-hh976 worker02 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-06 02:54:55 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-06 02:54:57 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-06 02:54:57 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-06 02:54:55 +0000 UTC }] | |
Mar 6 02:55:55.151: INFO: | |
Mar 6 02:55:55.154: INFO: | |
Logging node info for node master01 | |
Mar 6 02:55:55.156: INFO: Node Info: &Node{ObjectMeta:{master01 /api/v1/nodes/master01 aeae8a5b-4e17-4702-bb02-bcfde6cdb12a 7194 0 2020-03-06 02:29:18 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master01 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"76:15:82:0d:8b:ab"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:192.168.1.247 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{42140479488 0} {<nil>} 41152812Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3823214592 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{37926431477 0} {<nil>} 37926431477 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3718356992 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-03-06 02:53:56 +0000 UTC,LastTransitionTime:2020-03-06 02:29:18 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-03-06 02:53:56 +0000 UTC,LastTransitionTime:2020-03-06 02:29:18 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-03-06 02:53:56 +0000 UTC,LastTransitionTime:2020-03-06 02:29:18 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-03-06 02:53:56 +0000 UTC,LastTransitionTime:2020-03-06 02:30:33 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:192.168.1.247,},NodeAddress{Type:Hostname,Address:master01,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:20200220105402131453637367482142,SystemUUID:195205FE-EE72-4794-8EAA-AC554EFDEC9B,BootID:6a3bf627-7476-4f52-84fa-f3eab6d26427,KernelVersion:3.10.0-1062.12.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.5,KubeletVersion:v1.17.3,KubeProxyVersion:v1.17.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[192.168.1.252/library/k8s-keepalived@sha256:3db0032ef2feef675710595681cf9463470af179cd324c6773e831b6649ef785 192.168.1.252/library/k8s-keepalived:1.3.5],SizeBytes:356553439,},ContainerImage{Names:[sonobuoy/systemd-logs@sha256:fadad24a66ddd544987c38811108a73d1a306dd3b5e3f090b207786f2825ffde sonobuoy/systemd-logs:v0.3],SizeBytes:297365055,},ContainerImage{Names:[192.168.1.252/library/etcd-amd64@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216 
192.168.1.252/library/etcd-amd64:3.4.3-0],SizeBytes:288426917,},ContainerImage{Names:[192.168.1.252/library/node@sha256:3226b047a7034918a05c986347c5fb4d2cce6d0844f325851bfba586271ee617 192.168.1.252/library/node:v3.12.0],SizeBytes:257501722,},ContainerImage{Names:[192.168.1.252/library/cni@sha256:dc3bc525f1d3b794db1f2a7ceb7d8b84699d13e1431fbc117063f7e2075ff4b5 192.168.1.252/library/cni:v3.12.0],SizeBytes:206678344,},ContainerImage{Names:[192.168.1.252/library/kube-apiserver@sha256:4ee4113bcce32ae436b15364b09a2439dee15f6a19bbcf43a470d3dc879b0c62 192.168.1.252/library/kube-apiserver:v1.17.3],SizeBytes:170986003,},ContainerImage{Names:[192.168.1.252/library/envoy@sha256:b36ee021fc4d285de7861dbaee01e7437ce1d63814ead6ae3e4dfcad4a951b2e 192.168.1.252/library/envoy:v1.12.2],SizeBytes:170487454,},ContainerImage{Names:[192.168.1.252/library/kube-controller-manager@sha256:8f03391c0d22e3da8d22725178efe3c4338e1920504f7f4eb4a1c7f5f40c4c6e 192.168.1.252/library/kube-controller-manager:v1.17.3],SizeBytes:160918035,},ContainerImage{Names:[192.168.1.252/library/dashboard@sha256:4e0d39dae7e089b77fe2bbcef648f89905716db9c1f0884950bfd42d9f446c29 192.168.1.252/library/dashboard:v2.0.0-rc5],SizeBytes:126359420,},ContainerImage{Names:[192.168.1.252/library/kube-proxy@sha256:8dfd672298f8fbcd37731e0f2e45be87199cee6ee1f482ffec766693dcec0ec4 192.168.1.252/library/kube-proxy:v1.17.3],SizeBytes:115964919,},ContainerImage{Names:[192.168.1.252/library/pod2daemon-flexvol@sha256:2bf967507ad1adb749f3484b5d39e7d7b8700c4a0f836e8093dae5c57a585ccf 192.168.1.252/library/pod2daemon-flexvol:v3.12.0],SizeBytes:111122324,},ContainerImage{Names:[192.168.1.252/library/kube-scheduler@sha256:eea535b58da5b2fb66b32c61c6913d014cf85061a39527ad9bca2fa84b53dc1c 192.168.1.252/library/kube-scheduler:v1.17.3],SizeBytes:94435859,},ContainerImage{Names:[sonobuoy/sonobuoy@sha256:73f9cfe546ac6d0d5b94308293484c29c3b02fc341a975cfca97c80dd8728ed7 sonobuoy/sonobuoy:v0.17.2],SizeBytes:84339798,},ContainerImage{Names:[192.168.1.252/library/k8s-haproxy@sha256:f88bca67a2782e7bd88af168d5c11216c32104d12fe9240fac54a1d3196e3f9c 192.168.1.252/library/k8s-haproxy:2.0.0],SizeBytes:73550856,},ContainerImage{Names:[192.168.1.252/library/kube-controllers@sha256:edf14a5bcc663d2b0013b1830469626b7aa27206cbc7715ed83c042890ca5837 192.168.1.252/library/kube-controllers:v3.12.0],SizeBytes:56567983,},ContainerImage{Names:[192.168.1.252/library/typha@sha256:3baf9aef445a3224160748d6f560426eab798d6c65620020b2466e114bf6805f 192.168.1.252/library/typha:v3.12.0],SizeBytes:56034822,},ContainerImage{Names:[192.168.1.252/library/flannel@sha256:bd76b84c74ad70368a2341c2402841b75950df881388e43fc2aca000c546653a 192.168.1.252/library/flannel:v0.11.0-amd64],SizeBytes:52567296,},ContainerImage{Names:[192.168.1.252/library/ctl@sha256:128e4c95cf92a482496d591c43cad2a6a21fab1f0e8a8f13e8503f1324106dc8 192.168.1.252/library/ctl:v3.12.0],SizeBytes:47895826,},ContainerImage{Names:[192.168.1.252/library/coredns@sha256:608ac7ccba5ce41c6941fca13bc67059c1eef927fd968b554b790e21cc92543c 192.168.1.252/library/coredns:1.6.5],SizeBytes:41578211,},ContainerImage{Names:[192.168.1.252/library/metrics-scraper@sha256:e24a74b3b1cdc84d6285d507a12eb06907fd8c457b3e8ae9baa9418eca43efc4 192.168.1.252/library/metrics-scraper:v1.0.3],SizeBytes:40105664,},ContainerImage{Names:[192.168.1.252/library/metrics-server-amd64@sha256:c9c4e95068b51d6b33a9dccc61875df07dc650abbf4ac1a19d58b4628f89288b 
192.168.1.252/library/metrics-server-amd64:v0.3.6],SizeBytes:39944451,},ContainerImage{Names:[192.168.1.252/library/contour@sha256:3e10c69dfeaa830b84a50e6b47ce90e0f5a1aa84daf77f7662313077fa9579cf 192.168.1.252/library/contour:v1.1.0],SizeBytes:35721216,},ContainerImage{Names:[192.168.1.252/library/kuard-amd64@sha256:bd17153e9a3319f401acc7a27759243f37d422c06cbbf01cb3e1f54bbbfe14f4 192.168.1.252/library/kuard-amd64:1],SizeBytes:19745911,},ContainerImage{Names:[192.168.1.252/library/pause@sha256:fcaff905397ba63fd376d0c3019f1f1cb6e7506131389edbcb3d22719f1ae54d 192.168.1.252/library/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} | |
Mar 6 02:55:55.156: INFO: | |
Logging kubelet events for node master01 | |
Mar 6 02:55:55.160: INFO: | |
Logging pods the kubelet thinks are on node master01 | |
Mar 6 02:55:55.170: INFO: kube-flannel-ds-amd64-6mbnb started at 2020-03-06 02:30:22 +0000 UTC (1+1 container statuses recorded) | |
Mar 6 02:55:55.170: INFO: Init container install-cni ready: true, restart count 0 | |
Mar 6 02:55:55.170: INFO: Container kube-flannel ready: true, restart count 0 | |
Mar 6 02:55:55.170: INFO: kube-proxy-4j8ft started at 2020-03-06 02:30:22 +0000 UTC (0+1 container statuses recorded) | |
Mar 6 02:55:55.170: INFO: Container kube-proxy ready: true, restart count 0 | |
Mar 6 02:55:55.170: INFO: kube-apiserver-master01 started at 2020-03-06 02:30:22 +0000 UTC (0+1 container statuses recorded) | |
Mar 6 02:55:55.170: INFO: Container kube-apiserver ready: true, restart count 0 | |
Mar 6 02:55:55.170: INFO: kube-controller-manager-master01 started at 2020-03-06 02:30:22 +0000 UTC (0+1 container statuses recorded) | |
Mar 6 02:55:55.170: INFO: Container kube-controller-manager ready: true, restart count 1 | |
Mar 6 02:55:55.170: INFO: kube-scheduler-master01 started at 2020-03-06 02:30:22 +0000 UTC (0+1 container statuses recorded) | |
Mar 6 02:55:55.170: INFO: Container kube-scheduler ready: true, restart count 1 | |
Mar 6 02:55:55.170: INFO: sonobuoy-systemd-logs-daemon-set-2e5dbce20e154397-drhpn started at 2020-03-06 02:38:12 +0000 UTC (0+2 container statuses recorded) | |
Mar 6 02:55:55.170: INFO: Container sonobuoy-worker ready: true, restart count 0 | |
Mar 6 02:55:55.170: INFO: Container systemd-logs ready: true, restart count 0 | |
W0306 02:55:55.173076 19 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. | |
Mar 6 02:55:55.188: INFO: | |
Latency metrics for node master01 | |
Mar 6 02:55:55.188: INFO: | |
Logging node info for node master02 | |
Mar 6 02:55:55.190: INFO: Node Info: &Node{ObjectMeta:{master02 /api/v1/nodes/master02 6a0ecb6f-ef31-4754-858b-3eba76999224 7180 0 2020-03-06 02:29:17 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master02 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"82:c1:38:99:3b:39"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:192.168.1.248 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{42140479488 0} {<nil>} 41152812Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3823214592 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{37926431477 0} {<nil>} 37926431477 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3718356992 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-03-06 02:53:53 +0000 UTC,LastTransitionTime:2020-03-06 02:29:16 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-03-06 02:53:53 +0000 UTC,LastTransitionTime:2020-03-06 02:29:16 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-03-06 02:53:53 +0000 UTC,LastTransitionTime:2020-03-06 02:29:16 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-03-06 02:53:53 +0000 UTC,LastTransitionTime:2020-03-06 02:30:10 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:192.168.1.248,},NodeAddress{Type:Hostname,Address:master02,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:20200220105402131453637367482142,SystemUUID:8B2C4639-6D22-4D0D-A03C-F6D7E328F9D5,BootID:efd7329f-ae31-4806-ba13-7fdd5fad57df,KernelVersion:3.10.0-1062.12.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.5,KubeletVersion:v1.17.3,KubeProxyVersion:v1.17.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[sonobuoy/systemd-logs@sha256:fadad24a66ddd544987c38811108a73d1a306dd3b5e3f090b207786f2825ffde sonobuoy/systemd-logs:v0.3],SizeBytes:297365055,},ContainerImage{Names:[192.168.1.252/library/etcd-amd64@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216 192.168.1.252/library/etcd-amd64:3.4.3-0],SizeBytes:288426917,},ContainerImage{Names:[192.168.1.252/library/kube-apiserver@sha256:4ee4113bcce32ae436b15364b09a2439dee15f6a19bbcf43a470d3dc879b0c62 
192.168.1.252/library/kube-apiserver:v1.17.3],SizeBytes:170986003,},ContainerImage{Names:[192.168.1.252/library/kube-controller-manager@sha256:8f03391c0d22e3da8d22725178efe3c4338e1920504f7f4eb4a1c7f5f40c4c6e 192.168.1.252/library/kube-controller-manager:v1.17.3],SizeBytes:160918035,},ContainerImage{Names:[192.168.1.252/library/kube-proxy@sha256:8dfd672298f8fbcd37731e0f2e45be87199cee6ee1f482ffec766693dcec0ec4 192.168.1.252/library/kube-proxy:v1.17.3],SizeBytes:115964919,},ContainerImage{Names:[192.168.1.252/library/kube-scheduler@sha256:eea535b58da5b2fb66b32c61c6913d014cf85061a39527ad9bca2fa84b53dc1c 192.168.1.252/library/kube-scheduler:v1.17.3],SizeBytes:94435859,},ContainerImage{Names:[sonobuoy/sonobuoy@sha256:73f9cfe546ac6d0d5b94308293484c29c3b02fc341a975cfca97c80dd8728ed7 sonobuoy/sonobuoy:v0.17.2],SizeBytes:84339798,},ContainerImage{Names:[192.168.1.252/library/k8s-haproxy@sha256:f88bca67a2782e7bd88af168d5c11216c32104d12fe9240fac54a1d3196e3f9c 192.168.1.252/library/k8s-haproxy:2.0.0],SizeBytes:73550856,},ContainerImage{Names:[192.168.1.252/library/flannel@sha256:bd76b84c74ad70368a2341c2402841b75950df881388e43fc2aca000c546653a 192.168.1.252/library/flannel:v0.11.0-amd64],SizeBytes:52567296,},ContainerImage{Names:[192.168.1.252/library/coredns@sha256:608ac7ccba5ce41c6941fca13bc67059c1eef927fd968b554b790e21cc92543c 192.168.1.252/library/coredns:1.6.5],SizeBytes:41578211,},ContainerImage{Names:[192.168.1.252/library/pause@sha256:fcaff905397ba63fd376d0c3019f1f1cb6e7506131389edbcb3d22719f1ae54d 192.168.1.252/library/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} | |
Mar 6 02:55:55.190: INFO: | |
Logging kubelet events for node master02 | |
Mar 6 02:55:55.194: INFO: | |
Logging pods the kubelet thinks are on node master02 | |
Mar 6 02:55:55.205: INFO: kube-scheduler-master02 started at 2020-03-06 02:29:23 +0000 UTC (0+1 container statuses recorded) | |
Mar 6 02:55:55.205: INFO: Container kube-scheduler ready: true, restart count 1 | |
Mar 6 02:55:55.205: INFO: kube-proxy-scdss started at 2020-03-06 02:30:00 +0000 UTC (0+1 container statuses recorded) | |
Mar 6 02:55:55.205: INFO: Container kube-proxy ready: true, restart count 0 | |
Mar 6 02:55:55.205: INFO: kube-flannel-ds-amd64-vfl78 started at 2020-03-06 02:30:00 +0000 UTC (1+1 container statuses recorded) | |
Mar 6 02:55:55.205: INFO: Init container install-cni ready: true, restart count 0 | |
Mar 6 02:55:55.205: INFO: Container kube-flannel ready: true, restart count 0 | |
Mar 6 02:55:55.205: INFO: coredns-7795996659-phdkc started at 2020-03-06 02:30:13 +0000 UTC (0+1 container statuses recorded) | |
Mar 6 02:55:55.205: INFO: Container coredns ready: true, restart count 0 | |
Mar 6 02:55:55.205: INFO: sonobuoy-systemd-logs-daemon-set-2e5dbce20e154397-2zmwm started at 2020-03-06 02:38:12 +0000 UTC (0+2 container statuses recorded) | |
Mar 6 02:55:55.205: INFO: Container sonobuoy-worker ready: true, restart count 0 | |
Mar 6 02:55:55.205: INFO: Container systemd-logs ready: true, restart count 0 | |
Mar 6 02:55:55.205: INFO: kube-apiserver-master02 started at 2020-03-06 02:29:50 +0000 UTC (0+1 container statuses recorded) | |
Mar 6 02:55:55.205: INFO: Container kube-apiserver ready: true, restart count 0 | |
Mar 6 02:55:55.205: INFO: kube-controller-manager-master02 started at 2020-03-06 02:29:50 +0000 UTC (0+1 container statuses recorded) | |
Mar 6 02:55:55.205: INFO: Container kube-controller-manager ready: true, restart count 1 | |
W0306 02:55:55.210352 19 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. | |
Mar 6 02:55:55.229: INFO: | |
Latency metrics for node master02 | |
Mar 6 02:55:55.229: INFO: | |
Logging node info for node master03 | |
Mar 6 02:55:55.231: INFO: Node Info: &Node{ObjectMeta:{master03 /api/v1/nodes/master03 c508ee4c-fe9d-4c73-a857-e57fba26fa86 7181 0 2020-03-06 02:29:17 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master03 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"4a:aa:08:ea:16:90"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:192.168.1.249 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{42140479488 0} {<nil>} 41152812Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3823226880 0} {<nil>} 3733620Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{37926431477 0} {<nil>} 37926431477 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3718369280 0} {<nil>} 3631220Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-03-06 02:53:54 +0000 UTC,LastTransitionTime:2020-03-06 02:29:17 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-03-06 02:53:54 +0000 UTC,LastTransitionTime:2020-03-06 02:29:17 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-03-06 02:53:54 +0000 UTC,LastTransitionTime:2020-03-06 02:29:17 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-03-06 02:53:54 +0000 UTC,LastTransitionTime:2020-03-06 02:30:10 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:192.168.1.249,},NodeAddress{Type:Hostname,Address:master03,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:20200220105402131453637367482142,SystemUUID:0C8F0A67-EB7E-42DE-9418-0973FE735A08,BootID:05b1fa23-e6be-4032-bc93-8800264dff91,KernelVersion:3.10.0-1062.12.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.5,KubeletVersion:v1.17.3,KubeProxyVersion:v1.17.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[sonobuoy/systemd-logs@sha256:fadad24a66ddd544987c38811108a73d1a306dd3b5e3f090b207786f2825ffde sonobuoy/systemd-logs:v0.3],SizeBytes:297365055,},ContainerImage{Names:[192.168.1.252/library/etcd-amd64@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216 192.168.1.252/library/etcd-amd64:3.4.3-0],SizeBytes:288426917,},ContainerImage{Names:[192.168.1.252/library/kube-apiserver@sha256:4ee4113bcce32ae436b15364b09a2439dee15f6a19bbcf43a470d3dc879b0c62 
192.168.1.252/library/kube-apiserver:v1.17.3],SizeBytes:170986003,},ContainerImage{Names:[192.168.1.252/library/kube-controller-manager@sha256:8f03391c0d22e3da8d22725178efe3c4338e1920504f7f4eb4a1c7f5f40c4c6e 192.168.1.252/library/kube-controller-manager:v1.17.3],SizeBytes:160918035,},ContainerImage{Names:[192.168.1.252/library/dashboard@sha256:4e0d39dae7e089b77fe2bbcef648f89905716db9c1f0884950bfd42d9f446c29 192.168.1.252/library/dashboard:v2.0.0-rc5],SizeBytes:126359420,},ContainerImage{Names:[192.168.1.252/library/kube-proxy@sha256:8dfd672298f8fbcd37731e0f2e45be87199cee6ee1f482ffec766693dcec0ec4 192.168.1.252/library/kube-proxy:v1.17.3],SizeBytes:115964919,},ContainerImage{Names:[192.168.1.252/library/kube-scheduler@sha256:eea535b58da5b2fb66b32c61c6913d014cf85061a39527ad9bca2fa84b53dc1c 192.168.1.252/library/kube-scheduler:v1.17.3],SizeBytes:94435859,},ContainerImage{Names:[sonobuoy/sonobuoy@sha256:73f9cfe546ac6d0d5b94308293484c29c3b02fc341a975cfca97c80dd8728ed7 sonobuoy/sonobuoy:v0.17.2],SizeBytes:84339798,},ContainerImage{Names:[192.168.1.252/library/k8s-haproxy@sha256:f88bca67a2782e7bd88af168d5c11216c32104d12fe9240fac54a1d3196e3f9c 192.168.1.252/library/k8s-haproxy:2.0.0],SizeBytes:73550856,},ContainerImage{Names:[192.168.1.252/library/flannel@sha256:bd76b84c74ad70368a2341c2402841b75950df881388e43fc2aca000c546653a 192.168.1.252/library/flannel:v0.11.0-amd64],SizeBytes:52567296,},ContainerImage{Names:[192.168.1.252/library/coredns@sha256:608ac7ccba5ce41c6941fca13bc67059c1eef927fd968b554b790e21cc92543c 192.168.1.252/library/coredns:1.6.5],SizeBytes:41578211,},ContainerImage{Names:[192.168.1.252/library/metrics-scraper@sha256:e24a74b3b1cdc84d6285d507a12eb06907fd8c457b3e8ae9baa9418eca43efc4 192.168.1.252/library/metrics-scraper:v1.0.3],SizeBytes:40105664,},ContainerImage{Names:[192.168.1.252/library/pause@sha256:fcaff905397ba63fd376d0c3019f1f1cb6e7506131389edbcb3d22719f1ae54d 192.168.1.252/library/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} | |
Mar 6 02:55:55.231: INFO: | |
Logging kubelet events for node master03 | |
Mar 6 02:55:55.235: INFO: | |
Logging pods the kubelet thinks are on node master03 | |
Mar 6 02:55:55.245: INFO: sonobuoy-systemd-logs-daemon-set-2e5dbce20e154397-w5psq started at 2020-03-06 02:38:12 +0000 UTC (0+2 container statuses recorded) | |
Mar 6 02:55:55.245: INFO: Container sonobuoy-worker ready: true, restart count 0 | |
Mar 6 02:55:55.245: INFO: Container systemd-logs ready: true, restart count 0 | |
Mar 6 02:55:55.245: INFO: kube-apiserver-master03 started at 2020-03-06 02:29:24 +0000 UTC (0+1 container statuses recorded) | |
Mar 6 02:55:55.245: INFO: Container kube-apiserver ready: true, restart count 0 | |
Mar 6 02:55:55.245: INFO: kube-scheduler-master03 started at 2020-03-06 02:29:38 +0000 UTC (0+1 container statuses recorded) | |
Mar 6 02:55:55.245: INFO: Container kube-scheduler ready: true, restart count 1 | |
Mar 6 02:55:55.245: INFO: kube-proxy-stbnn started at 2020-03-06 02:30:00 +0000 UTC (0+1 container statuses recorded) | |
Mar 6 02:55:55.245: INFO: Container kube-proxy ready: true, restart count 0 | |
Mar 6 02:55:55.245: INFO: kubernetes-dashboard-6647798d59-j2ms4 started at 2020-03-06 02:30:10 +0000 UTC (0+1 container statuses recorded) | |
Mar 6 02:55:55.245: INFO: Container kubernetes-dashboard ready: true, restart count 0 | |
Mar 6 02:55:55.245: INFO: coredns-7795996659-cmq4d started at 2020-03-06 02:30:13 +0000 UTC (0+1 container statuses recorded) | |
Mar 6 02:55:55.245: INFO: Container coredns ready: true, restart count 0 | |
Mar 6 02:55:55.245: INFO: kube-controller-manager-master03 started at 2020-03-06 02:29:50 +0000 UTC (0+1 container statuses recorded) | |
Mar 6 02:55:55.245: INFO: Container kube-controller-manager ready: true, restart count 1 | |
Mar 6 02:55:55.245: INFO: kube-flannel-ds-amd64-hs69k started at 2020-03-06 02:30:00 +0000 UTC (1+1 container statuses recorded) | |
Mar 6 02:55:55.245: INFO: Init container install-cni ready: true, restart count 0 | |
Mar 6 02:55:55.245: INFO: Container kube-flannel ready: true, restart count 0 | |
Mar 6 02:55:55.245: INFO: dashboard-metrics-scraper-56568cb9d7-d57kl started at 2020-03-06 02:30:10 +0000 UTC (0+1 container statuses recorded) | |
Mar 6 02:55:55.245: INFO: Container dashboard-metrics-scraper ready: true, restart count 0 | |
W0306 02:55:55.248664 19 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. | |
Mar 6 02:55:55.263: INFO: | |
Latency metrics for node master03 | |
Mar 6 02:55:55.263: INFO: | |
Logging node info for node worker01 | |
Mar 6 02:55:55.265: INFO: Node Info: &Node{ObjectMeta:{worker01 /api/v1/nodes/worker01 cf4203bb-1bfa-4b35-991f-935275b6bc46 7294 0 2020-03-06 02:30:30 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:worker01 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"5a:49:f5:5b:74:b3"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:192.168.1.250 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{42140479488 0} {<nil>} 41152812Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3823214592 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{37926431477 0} {<nil>} 37926431477 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3718356992 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-03-06 02:54:19 +0000 UTC,LastTransitionTime:2020-03-06 02:30:30 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-03-06 02:54:19 +0000 UTC,LastTransitionTime:2020-03-06 02:30:30 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-03-06 02:54:19 +0000 UTC,LastTransitionTime:2020-03-06 02:30:30 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-03-06 02:54:19 +0000 UTC,LastTransitionTime:2020-03-06 02:30:45 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:192.168.1.250,},NodeAddress{Type:Hostname,Address:worker01,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:20200220105402131453637367482142,SystemUUID:5CA364AA-0FF8-4B57-BA86-F28699575F0D,BootID:c85ad0c4-ebcf-4d01-97f0-a36c1cfc50be,KernelVersion:3.10.0-1062.12.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.5,KubeletVersion:v1.17.3,KubeProxyVersion:v1.17.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[sonobuoy/systemd-logs@sha256:fadad24a66ddd544987c38811108a73d1a306dd3b5e3f090b207786f2825ffde sonobuoy/systemd-logs:v0.3],SizeBytes:297365055,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[192.168.1.252/library/kube-proxy@sha256:8dfd672298f8fbcd37731e0f2e45be87199cee6ee1f482ffec766693dcec0ec4 
192.168.1.252/library/kube-proxy:v1.17.3],SizeBytes:115964919,},ContainerImage{Names:[sonobuoy/sonobuoy@sha256:73f9cfe546ac6d0d5b94308293484c29c3b02fc341a975cfca97c80dd8728ed7 sonobuoy/sonobuoy:v0.17.2],SizeBytes:84339798,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5 gcr.io/kubernetes-e2e-test-images/agnhost:2.8],SizeBytes:52800335,},ContainerImage{Names:[192.168.1.252/library/flannel@sha256:bd76b84c74ad70368a2341c2402841b75950df881388e43fc2aca000c546653a 192.168.1.252/library/flannel:v0.11.0-amd64],SizeBytes:52567296,},ContainerImage{Names:[192.168.1.252/library/metrics-server-amd64@sha256:c9c4e95068b51d6b33a9dccc61875df07dc650abbf4ac1a19d58b4628f89288b 192.168.1.252/library/metrics-server-amd64:v0.3.6],SizeBytes:39944451,},ContainerImage{Names:[192.168.1.252/library/kuard-amd64@sha256:bd17153e9a3319f401acc7a27759243f37d422c06cbbf01cb3e1f54bbbfe14f4 192.168.1.252/library/kuard-amd64:1],SizeBytes:19745911,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[192.168.1.252/library/pause@sha256:fcaff905397ba63fd376d0c3019f1f1cb6e7506131389edbcb3d22719f1ae54d 192.168.1.252/library/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} | |
Mar 6 02:55:55.265: INFO: | |
Logging kubelet events for node worker01 | |
Mar 6 02:55:55.269: INFO: | |
Logging pods the kubelet thinks are on node worker01 | |
Mar 6 02:55:55.280: INFO: sonobuoy-systemd-logs-daemon-set-2e5dbce20e154397-2bz8g started at 2020-03-06 02:38:12 +0000 UTC (0+2 container statuses recorded) | |
Mar 6 02:55:55.280: INFO: Container sonobuoy-worker ready: true, restart count 0 | |
Mar 6 02:55:55.280: INFO: Container systemd-logs ready: true, restart count 0 | |
Mar 6 02:55:55.280: INFO: kube-proxy-kcb8f started at 2020-03-06 02:30:30 +0000 UTC (0+1 container statuses recorded) | |
Mar 6 02:55:55.280: INFO: Container kube-proxy ready: true, restart count 0 | |
Mar 6 02:55:55.280: INFO: envoy-lvmcb started at 2020-03-06 02:30:45 +0000 UTC (1+1 container statuses recorded) | |
Mar 6 02:55:55.280: INFO: Init container envoy-initconfig ready: false, restart count 0 | |
Mar 6 02:55:55.280: INFO: Container envoy ready: false, restart count 0 | |
Mar 6 02:55:55.280: INFO: contour-certgen-82k46 started at 2020-03-06 02:30:46 +0000 UTC (0+1 container statuses recorded) | |
Mar 6 02:55:55.280: INFO: Container contour ready: false, restart count 0 | |
Mar 6 02:55:55.280: INFO: kuard-678c676f5d-m29b6 started at 2020-03-06 02:30:49 +0000 UTC (0+1 container statuses recorded) | |
Mar 6 02:55:55.280: INFO: Container kuard ready: true, restart count 0 | |
Mar 6 02:55:55.280: INFO: kuard-678c676f5d-tzsnn started at 2020-03-06 02:30:51 +0000 UTC (0+1 container statuses recorded) | |
Mar 6 02:55:55.280: INFO: Container kuard ready: true, restart count 0 | |
Mar 6 02:55:55.280: INFO: contour-54748c65f5-gk5sz started at 2020-03-06 02:30:51 +0000 UTC (0+1 container statuses recorded) | |
Mar 6 02:55:55.280: INFO: Container contour ready: false, restart count 0 | |
Mar 6 02:55:55.280: INFO: kube-flannel-ds-amd64-xxhz9 started at 2020-03-06 02:30:30 +0000 UTC (1+1 container statuses recorded) | |
Mar 6 02:55:55.280: INFO: Init container install-cni ready: true, restart count 0 | |
Mar 6 02:55:55.280: INFO: Container kube-flannel ready: true, restart count 1 | |
Mar 6 02:55:55.280: INFO: contour-54748c65f5-jl5wz started at 2020-03-06 02:30:46 +0000 UTC (0+1 container statuses recorded) | |
Mar 6 02:55:55.280: INFO: Container contour ready: false, restart count 0 | |
Mar 6 02:55:55.280: INFO: metrics-server-78799bf646-xrsnn started at 2020-03-06 02:30:51 +0000 UTC (0+1 container statuses recorded) | |
Mar 6 02:55:55.280: INFO: Container metrics-server ready: true, restart count 0 | |
Mar 6 02:55:55.280: INFO: kuard-678c676f5d-vsn86 started at 2020-03-06 02:30:51 +0000 UTC (0+1 container statuses recorded) | |
Mar 6 02:55:55.280: INFO: Container kuard ready: true, restart count 0 | |
W0306 02:55:55.282484 19 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. | |
Mar 6 02:55:55.301: INFO: | |
Latency metrics for node worker01 | |
Mar 6 02:55:55.301: INFO: | |
Logging node info for node worker02 | |
Mar 6 02:55:55.303: INFO: Node Info: &Node{ObjectMeta:{worker02 /api/v1/nodes/worker02 f0994ba1-7e4e-4cc8-b3c8-25d34b25d9ce 7669 0 2020-03-06 02:30:30 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:worker02 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"1a:75:0a:e8:cc:76"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:192.168.1.251 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{42140479488 0} {<nil>} 41152812Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3823214592 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{37926431477 0} {<nil>} 37926431477 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3718356992 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-03-06 02:55:18 +0000 UTC,LastTransitionTime:2020-03-06 02:30:30 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-03-06 02:55:18 +0000 UTC,LastTransitionTime:2020-03-06 02:30:30 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-03-06 02:55:18 +0000 UTC,LastTransitionTime:2020-03-06 02:30:30 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-03-06 02:55:18 +0000 UTC,LastTransitionTime:2020-03-06 02:30:55 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:192.168.1.251,},NodeAddress{Type:Hostname,Address:worker02,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:20200220105402131453637367482142,SystemUUID:EDBF7E33-228B-4233-93CF-7850B5A311E4,BootID:bd6a4f0f-5ddb-4585-83df-253b9292b617,KernelVersion:3.10.0-1062.12.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.5,KubeletVersion:v1.17.3,KubeProxyVersion:v1.17.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[gcr.io/google-containers/conformance@sha256:502434491cbc3fac5d9a606a879e554cf881b2ba5b688bed25f2c33d3ff1c777 gcr.io/google-containers/conformance:v1.17.3],SizeBytes:575831882,},ContainerImage{Names:[sonobuoy/systemd-logs@sha256:fadad24a66ddd544987c38811108a73d1a306dd3b5e3f090b207786f2825ffde sonobuoy/systemd-logs:v0.3],SizeBytes:297365055,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 
httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[192.168.1.252/library/kube-proxy@sha256:8dfd672298f8fbcd37731e0f2e45be87199cee6ee1f482ffec766693dcec0ec4 192.168.1.252/library/kube-proxy:v1.17.3],SizeBytes:115964919,},ContainerImage{Names:[sonobuoy/sonobuoy@sha256:73f9cfe546ac6d0d5b94308293484c29c3b02fc341a975cfca97c80dd8728ed7 sonobuoy/sonobuoy:v0.17.2],SizeBytes:84339798,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5 gcr.io/kubernetes-e2e-test-images/agnhost:2.8],SizeBytes:52800335,},ContainerImage{Names:[192.168.1.252/library/flannel@sha256:bd76b84c74ad70368a2341c2402841b75950df881388e43fc2aca000c546653a 192.168.1.252/library/flannel:v0.11.0-amd64],SizeBytes:52567296,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0],SizeBytes:4732240,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0],SizeBytes:1450451,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[192.168.1.252/library/pause@sha256:fcaff905397ba63fd376d0c3019f1f1cb6e7506131389edbcb3d22719f1ae54d k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea 192.168.1.252/library/pause:3.1 k8s.gcr.io/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} | |
Mar 6 02:55:55.303: INFO: | |
Logging kubelet events for node worker02 | |
Mar 6 02:55:55.307: INFO: | |
Logging pods the kubelet thinks are on node worker02
Mar 6 02:55:55.311: INFO: sonobuoy-systemd-logs-daemon-set-2e5dbce20e154397-bpjtd started at 2020-03-06 02:38:12 +0000 UTC (0+2 container statuses recorded) | |
Mar 6 02:55:55.311: INFO: Container sonobuoy-worker ready: true, restart count 0 | |
Mar 6 02:55:55.311: INFO: Container systemd-logs ready: true, restart count 0 | |
Mar 6 02:55:55.311: INFO: sample-webhook-deployment-5f65f8c764-hh976 started at 2020-03-06 02:54:55 +0000 UTC (0+1 container statuses recorded) | |
Mar 6 02:55:55.311: INFO: Container sample-webhook ready: true, restart count 0 | |
Mar 6 02:55:55.311: INFO: kube-flannel-ds-amd64-ztfzf started at 2020-03-06 02:30:30 +0000 UTC (1+1 container statuses recorded) | |
Mar 6 02:55:55.311: INFO: Init container install-cni ready: true, restart count 0 | |
Mar 6 02:55:55.311: INFO: Container kube-flannel ready: true, restart count 0 | |
Mar 6 02:55:55.311: INFO: envoy-wgz76 started at 2020-03-06 02:30:55 +0000 UTC (1+1 container statuses recorded) | |
Mar 6 02:55:55.311: INFO: Init container envoy-initconfig ready: false, restart count 0 | |
Mar 6 02:55:55.311: INFO: Container envoy ready: false, restart count 0 | |
Mar 6 02:55:55.311: INFO: sonobuoy-e2e-job-67137ff64ac145d3 started at 2020-03-06 02:38:12 +0000 UTC (0+2 container statuses recorded) | |
Mar 6 02:55:55.311: INFO: Container e2e ready: true, restart count 0 | |
Mar 6 02:55:55.311: INFO: Container sonobuoy-worker ready: true, restart count 0 | |
Mar 6 02:55:55.311: INFO: kube-proxy-5xxdb started at 2020-03-06 02:30:30 +0000 UTC (0+1 container statuses recorded) | |
Mar 6 02:55:55.311: INFO: Container kube-proxy ready: true, restart count 1 | |
Mar 6 02:55:55.311: INFO: sonobuoy started at 2020-03-06 02:38:02 +0000 UTC (0+1 container statuses recorded) | |
Mar 6 02:55:55.311: INFO: Container kube-sonobuoy ready: true, restart count 0 | |
W0306 02:55:55.316605 19 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. | |
Mar 6 02:55:55.337: INFO: | |
Latency metrics for node worker02 | |
Mar 6 02:55:55.337: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
STEP: Destroying namespace "webhook-2842" for this suite. | |
STEP: Destroying namespace "webhook-2842-markers" for this suite. | |
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 | |
• Failure [60.324 seconds] | |
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 | |
should mutate custom resource with different stored version [Conformance] [It] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
Mar 6 02:55:54.630: waiting for webhook configuration to be ready | |
Unexpected error: | |
<*errors.errorString | 0xc0000b3950>: { | |
s: "timed out waiting for the condition", | |
} | |
timed out waiting for the condition | |
occurred | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:1865 | |
------------------------------ | |
{"msg":"FAILED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":278,"completed":39,"skipped":723,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]"]} | |
SSSSSSSSSS | |
------------------------------ | |
[sig-storage] Subpath Atomic writer volumes | |
should support subpaths with downward pod [LinuxOnly] [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
[BeforeEach] [sig-storage] Subpath | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 | |
STEP: Creating a kubernetes client | |
Mar 6 02:55:55.429: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759 | |
STEP: Building a namespace api object, basename subpath | |
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in subpath-3368 | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[BeforeEach] Atomic writer volumes | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 | |
STEP: Setting up data | |
[It] should support subpaths with downward pod [LinuxOnly] [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
STEP: Creating pod pod-subpath-test-downwardapi-krjn | |
STEP: Creating a pod to test atomic-volume-subpath | |
Mar 6 02:55:55.594: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-krjn" in namespace "subpath-3368" to be "success or failure" | |
Mar 6 02:55:55.598: INFO: Pod "pod-subpath-test-downwardapi-krjn": Phase="Pending", Reason="", readiness=false. Elapsed: 3.546284ms | |
Mar 6 02:55:57.600: INFO: Pod "pod-subpath-test-downwardapi-krjn": Phase="Running", Reason="", readiness=true. Elapsed: 2.006362591s | |
Mar 6 02:55:59.603: INFO: Pod "pod-subpath-test-downwardapi-krjn": Phase="Running", Reason="", readiness=true. Elapsed: 4.008996699s | |
Mar 6 02:56:01.605: INFO: Pod "pod-subpath-test-downwardapi-krjn": Phase="Running", Reason="", readiness=true. Elapsed: 6.011343078s | |
Mar 6 02:56:03.610: INFO: Pod "pod-subpath-test-downwardapi-krjn": Phase="Running", Reason="", readiness=true. Elapsed: 8.015840701s | |
Mar 6 02:56:05.613: INFO: Pod "pod-subpath-test-downwardapi-krjn": Phase="Running", Reason="", readiness=true. Elapsed: 10.018818621s | |
Mar 6 02:56:07.617: INFO: Pod "pod-subpath-test-downwardapi-krjn": Phase="Running", Reason="", readiness=true. Elapsed: 12.023012986s | |
Mar 6 02:56:09.620: INFO: Pod "pod-subpath-test-downwardapi-krjn": Phase="Running", Reason="", readiness=true. Elapsed: 14.02574638s | |
Mar 6 02:56:11.622: INFO: Pod "pod-subpath-test-downwardapi-krjn": Phase="Running", Reason="", readiness=true. Elapsed: 16.028159563s | |
Mar 6 02:56:13.624: INFO: Pod "pod-subpath-test-downwardapi-krjn": Phase="Running", Reason="", readiness=true. Elapsed: 18.030421715s | |
Mar 6 02:56:15.627: INFO: Pod "pod-subpath-test-downwardapi-krjn": Phase="Running", Reason="", readiness=true. Elapsed: 20.033125167s | |
Mar 6 02:56:17.630: INFO: Pod "pod-subpath-test-downwardapi-krjn": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.035580817s | |
STEP: Saw pod success | |
Mar 6 02:56:17.630: INFO: Pod "pod-subpath-test-downwardapi-krjn" satisfied condition "success or failure" | |
Mar 6 02:56:17.632: INFO: Trying to get logs from node worker02 pod pod-subpath-test-downwardapi-krjn container test-container-subpath-downwardapi-krjn: <nil> | |
STEP: delete the pod | |
Mar 6 02:56:17.647: INFO: Waiting for pod pod-subpath-test-downwardapi-krjn to disappear | |
Mar 6 02:56:17.648: INFO: Pod pod-subpath-test-downwardapi-krjn no longer exists | |
STEP: Deleting pod pod-subpath-test-downwardapi-krjn | |
Mar 6 02:56:17.648: INFO: Deleting pod "pod-subpath-test-downwardapi-krjn" in namespace "subpath-3368" | |
[AfterEach] [sig-storage] Subpath | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 | |
Mar 6 02:56:17.652: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
STEP: Destroying namespace "subpath-3368" for this suite. | |
• [SLOW TEST:22.231 seconds] | |
[sig-storage] Subpath | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 | |
Atomic writer volumes | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 | |
should support subpaths with downward pod [LinuxOnly] [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
------------------------------ | |
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":278,"completed":40,"skipped":733,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]"]} | |
SSSSSS | |
------------------------------ | |
[sig-storage] Projected configMap | |
should be consumable from pods in volume [NodeConformance] [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
[BeforeEach] [sig-storage] Projected configMap | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 | |
STEP: Creating a kubernetes client | |
Mar 6 02:56:17.660: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759 | |
STEP: Building a namespace api object, basename projected | |
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-6101 | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[It] should be consumable from pods in volume [NodeConformance] [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
STEP: Creating configMap with name projected-configmap-test-volume-3d6d3157-1cd5-4e9c-a543-1b46e030bfb8 | |
STEP: Creating a pod to test consume configMaps | |
Mar 6 02:56:17.800: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-2a168c44-b737-4aa3-8346-6608204dbeee" in namespace "projected-6101" to be "success or failure" | |
Mar 6 02:56:17.804: INFO: Pod "pod-projected-configmaps-2a168c44-b737-4aa3-8346-6608204dbeee": Phase="Pending", Reason="", readiness=false. Elapsed: 4.198548ms | |
Mar 6 02:56:19.806: INFO: Pod "pod-projected-configmaps-2a168c44-b737-4aa3-8346-6608204dbeee": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006455227s | |
STEP: Saw pod success | |
Mar 6 02:56:19.806: INFO: Pod "pod-projected-configmaps-2a168c44-b737-4aa3-8346-6608204dbeee" satisfied condition "success or failure" | |
Mar 6 02:56:19.808: INFO: Trying to get logs from node worker02 pod pod-projected-configmaps-2a168c44-b737-4aa3-8346-6608204dbeee container projected-configmap-volume-test: <nil> | |
STEP: delete the pod | |
Mar 6 02:56:19.822: INFO: Waiting for pod pod-projected-configmaps-2a168c44-b737-4aa3-8346-6608204dbeee to disappear | |
Mar 6 02:56:19.829: INFO: Pod pod-projected-configmaps-2a168c44-b737-4aa3-8346-6608204dbeee no longer exists | |
[AfterEach] [sig-storage] Projected configMap | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 | |
Mar 6 02:56:19.829: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
STEP: Destroying namespace "projected-6101" for this suite. | |
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":41,"skipped":739,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]"]} | |
------------------------------ | |
[k8s.io] Security Context When creating a pod with readOnlyRootFilesystem | |
should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
[BeforeEach] [k8s.io] Security Context | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 | |
STEP: Creating a kubernetes client | |
Mar 6 02:56:19.835: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759 | |
STEP: Building a namespace api object, basename security-context-test | |
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in security-context-test-6036 | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[BeforeEach] [k8s.io] Security Context | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 | |
[It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
Mar 6 02:56:19.966: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-f89c3abc-6299-43a4-815a-8b415a29629b" in namespace "security-context-test-6036" to be "success or failure" | |
Mar 6 02:56:19.968: INFO: Pod "busybox-readonly-false-f89c3abc-6299-43a4-815a-8b415a29629b": Phase="Pending", Reason="", readiness=false. Elapsed: 1.891679ms | |
Mar 6 02:56:21.970: INFO: Pod "busybox-readonly-false-f89c3abc-6299-43a4-815a-8b415a29629b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.0043452s | |
Mar 6 02:56:21.970: INFO: Pod "busybox-readonly-false-f89c3abc-6299-43a4-815a-8b415a29629b" satisfied condition "success or failure" | |
[AfterEach] [k8s.io] Security Context | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 | |
Mar 6 02:56:21.970: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
STEP: Destroying namespace "security-context-test-6036" for this suite. | |
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":278,"completed":42,"skipped":739,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]"]} | |
SSSSSSSSSSSSSSSSSSSSSSS | |
------------------------------ | |
[sig-storage] Downward API volume | |
should provide podname only [NodeConformance] [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
[BeforeEach] [sig-storage] Downward API volume | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 | |
STEP: Creating a kubernetes client | |
Mar 6 02:56:21.977: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759 | |
STEP: Building a namespace api object, basename downward-api | |
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-8337 | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[BeforeEach] [sig-storage] Downward API volume | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 | |
[It] should provide podname only [NodeConformance] [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
STEP: Creating a pod to test downward API volume plugin | |
Mar 6 02:56:22.122: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3dbec8bf-2c79-473f-8ae8-dcd0b31493f6" in namespace "downward-api-8337" to be "success or failure" | |
Mar 6 02:56:22.125: INFO: Pod "downwardapi-volume-3dbec8bf-2c79-473f-8ae8-dcd0b31493f6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.984432ms | |
Mar 6 02:56:24.127: INFO: Pod "downwardapi-volume-3dbec8bf-2c79-473f-8ae8-dcd0b31493f6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.005300546s | |
STEP: Saw pod success | |
Mar 6 02:56:24.127: INFO: Pod "downwardapi-volume-3dbec8bf-2c79-473f-8ae8-dcd0b31493f6" satisfied condition "success or failure" | |
Mar 6 02:56:24.129: INFO: Trying to get logs from node worker02 pod downwardapi-volume-3dbec8bf-2c79-473f-8ae8-dcd0b31493f6 container client-container: <nil> | |
STEP: delete the pod | |
Mar 6 02:56:24.143: INFO: Waiting for pod downwardapi-volume-3dbec8bf-2c79-473f-8ae8-dcd0b31493f6 to disappear | |
Mar 6 02:56:24.144: INFO: Pod downwardapi-volume-3dbec8bf-2c79-473f-8ae8-dcd0b31493f6 no longer exists | |
[AfterEach] [sig-storage] Downward API volume | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 | |
Mar 6 02:56:24.144: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
STEP: Destroying namespace "downward-api-8337" for this suite. | |
•{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":278,"completed":43,"skipped":762,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]"]} | |
SS | |
------------------------------ | |
[sig-storage] Downward API volume | |
should update labels on modification [NodeConformance] [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
[BeforeEach] [sig-storage] Downward API volume | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 | |
STEP: Creating a kubernetes client | |
Mar 6 02:56:24.151: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759 | |
STEP: Building a namespace api object, basename downward-api | |
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-2374 | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[BeforeEach] [sig-storage] Downward API volume | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 | |
[It] should update labels on modification [NodeConformance] [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
STEP: Creating the pod | |
Mar 6 02:56:26.804: INFO: Successfully updated pod "labelsupdatefe2f9bd0-61d8-480f-8b33-29239b58093b" | |
[AfterEach] [sig-storage] Downward API volume | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 | |
Mar 6 02:56:28.814: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
STEP: Destroying namespace "downward-api-2374" for this suite. | |
•{"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":278,"completed":44,"skipped":764,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]"]} | |
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS | |
------------------------------ | |
[sig-cli] Kubectl client Kubectl patch | |
should add annotations for pods in rc [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
[BeforeEach] [sig-cli] Kubectl client | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 | |
STEP: Creating a kubernetes client | |
Mar 6 02:56:28.832: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759 | |
STEP: Building a namespace api object, basename kubectl | |
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-8121 | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[BeforeEach] [sig-cli] Kubectl client | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 | |
[It] should add annotations for pods in rc [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
STEP: creating Agnhost RC | |
Mar 6 02:56:28.984: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 create -f - --namespace=kubectl-8121' | |
Mar 6 02:56:29.186: INFO: stderr: "" | |
Mar 6 02:56:29.186: INFO: stdout: "replicationcontroller/agnhost-master created\n" | |
STEP: Waiting for Agnhost master to start. | |
Mar 6 02:56:30.189: INFO: Selector matched 1 pods for map[app:agnhost] | |
Mar 6 02:56:30.189: INFO: Found 0 / 1 | |
Mar 6 02:56:31.189: INFO: Selector matched 1 pods for map[app:agnhost] | |
Mar 6 02:56:31.189: INFO: Found 1 / 1 | |
Mar 6 02:56:31.189: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 | |
STEP: patching all pods | |
Mar 6 02:56:31.191: INFO: Selector matched 1 pods for map[app:agnhost] | |
Mar 6 02:56:31.191: INFO: ForEach: Found 1 pods from the filter. Now looping through them. | |
Mar 6 02:56:31.191: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 patch pod agnhost-master-95zcv --namespace=kubectl-8121 -p {"metadata":{"annotations":{"x":"y"}}}' | |
Mar 6 02:56:31.269: INFO: stderr: "" | |
Mar 6 02:56:31.269: INFO: stdout: "pod/agnhost-master-95zcv patched\n" | |
STEP: checking annotations | |
Mar 6 02:56:31.271: INFO: Selector matched 1 pods for map[app:agnhost] | |
Mar 6 02:56:31.271: INFO: ForEach: Found 1 pods from the filter. Now looping through them. | |
[AfterEach] [sig-cli] Kubectl client | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 | |
Mar 6 02:56:31.271: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
STEP: Destroying namespace "kubectl-8121" for this suite. | |
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":278,"completed":45,"skipped":851,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]"]} | |
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS | |
------------------------------ | |
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] | |
should mutate pod and apply defaults after mutation [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 | |
STEP: Creating a kubernetes client | |
Mar 6 02:56:31.277: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759 | |
STEP: Building a namespace api object, basename webhook | |
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in webhook-4758 | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 | |
STEP: Setting up server cert | |
STEP: Create role binding to let webhook read extension-apiserver-authentication | |
STEP: Deploying the webhook pod | |
STEP: Wait for the deployment to be ready | |
Mar 6 02:56:31.875: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set | |
STEP: Deploying the webhook service | |
STEP: Verifying the service has paired with the endpoint | |
Mar 6 02:56:34.912: INFO: Waiting for the number of service:e2e-test-webhook endpoints to be 1
[It] should mutate pod and apply defaults after mutation [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
STEP: Registering the mutating pod webhook via the AdmissionRegistration API | |
Mar 6 02:56:44.933: INFO: Waiting for webhook configuration to be ready... | |
Mar 6 02:56:55.045: INFO: Waiting for webhook configuration to be ready... | |
Mar 6 02:57:05.144: INFO: Waiting for webhook configuration to be ready... | |
Mar 6 02:57:15.249: INFO: Waiting for webhook configuration to be ready... | |
Mar 6 02:57:25.259: INFO: Waiting for webhook configuration to be ready... | |
Mar 6 02:57:25.259: FAIL: waiting for webhook configuration to be ready | |
Unexpected error: | |
<*errors.errorString | 0xc0000b3950>: { | |
s: "timed out waiting for the condition", | |
} | |
timed out waiting for the condition | |
occurred | |
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 | |
STEP: Collecting events from namespace "webhook-4758". | |
STEP: Found 6 events. | |
Mar 6 02:57:25.262: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for sample-webhook-deployment-5f65f8c764-d7djs: {default-scheduler } Scheduled: Successfully assigned webhook-4758/sample-webhook-deployment-5f65f8c764-d7djs to worker02 | |
Mar 6 02:57:25.262: INFO: At 2020-03-06 02:56:31 +0000 UTC - event for sample-webhook-deployment: {deployment-controller } ScalingReplicaSet: Scaled up replica set sample-webhook-deployment-5f65f8c764 to 1 | |
Mar 6 02:57:25.262: INFO: At 2020-03-06 02:56:31 +0000 UTC - event for sample-webhook-deployment-5f65f8c764: {replicaset-controller } SuccessfulCreate: Created pod: sample-webhook-deployment-5f65f8c764-d7djs | |
Mar 6 02:57:25.262: INFO: At 2020-03-06 02:56:32 +0000 UTC - event for sample-webhook-deployment-5f65f8c764-d7djs: {kubelet worker02} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine | |
Mar 6 02:57:25.262: INFO: At 2020-03-06 02:56:32 +0000 UTC - event for sample-webhook-deployment-5f65f8c764-d7djs: {kubelet worker02} Created: Created container sample-webhook | |
Mar 6 02:57:25.262: INFO: At 2020-03-06 02:56:32 +0000 UTC - event for sample-webhook-deployment-5f65f8c764-d7djs: {kubelet worker02} Started: Started container sample-webhook | |
Mar 6 02:57:25.265: INFO: POD NODE PHASE GRACE CONDITIONS | |
Mar 6 02:57:25.265: INFO: sample-webhook-deployment-5f65f8c764-d7djs worker02 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-06 02:56:31 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-06 02:56:33 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-06 02:56:33 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-06 02:56:31 +0000 UTC }] | |
Mar 6 02:57:25.265: INFO: | |
Mar 6 02:57:25.268: INFO: | |
Logging node info for node master01 | |
Mar 6 02:57:25.269: INFO: Node Info: &Node{ObjectMeta:{master01 /api/v1/nodes/master01 aeae8a5b-4e17-4702-bb02-bcfde6cdb12a 7194 0 2020-03-06 02:29:18 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master01 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"76:15:82:0d:8b:ab"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:192.168.1.247 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{42140479488 0} {<nil>} 41152812Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3823214592 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{37926431477 0} {<nil>} 37926431477 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3718356992 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-03-06 02:53:56 +0000 UTC,LastTransitionTime:2020-03-06 02:29:18 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-03-06 02:53:56 +0000 UTC,LastTransitionTime:2020-03-06 02:29:18 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-03-06 02:53:56 +0000 UTC,LastTransitionTime:2020-03-06 02:29:18 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-03-06 02:53:56 +0000 UTC,LastTransitionTime:2020-03-06 02:30:33 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:192.168.1.247,},NodeAddress{Type:Hostname,Address:master01,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:20200220105402131453637367482142,SystemUUID:195205FE-EE72-4794-8EAA-AC554EFDEC9B,BootID:6a3bf627-7476-4f52-84fa-f3eab6d26427,KernelVersion:3.10.0-1062.12.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.5,KubeletVersion:v1.17.3,KubeProxyVersion:v1.17.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[192.168.1.252/library/k8s-keepalived@sha256:3db0032ef2feef675710595681cf9463470af179cd324c6773e831b6649ef785 192.168.1.252/library/k8s-keepalived:1.3.5],SizeBytes:356553439,},ContainerImage{Names:[sonobuoy/systemd-logs@sha256:fadad24a66ddd544987c38811108a73d1a306dd3b5e3f090b207786f2825ffde sonobuoy/systemd-logs:v0.3],SizeBytes:297365055,},ContainerImage{Names:[192.168.1.252/library/etcd-amd64@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216 
192.168.1.252/library/etcd-amd64:3.4.3-0],SizeBytes:288426917,},ContainerImage{Names:[192.168.1.252/library/node@sha256:3226b047a7034918a05c986347c5fb4d2cce6d0844f325851bfba586271ee617 192.168.1.252/library/node:v3.12.0],SizeBytes:257501722,},ContainerImage{Names:[192.168.1.252/library/cni@sha256:dc3bc525f1d3b794db1f2a7ceb7d8b84699d13e1431fbc117063f7e2075ff4b5 192.168.1.252/library/cni:v3.12.0],SizeBytes:206678344,},ContainerImage{Names:[192.168.1.252/library/kube-apiserver@sha256:4ee4113bcce32ae436b15364b09a2439dee15f6a19bbcf43a470d3dc879b0c62 192.168.1.252/library/kube-apiserver:v1.17.3],SizeBytes:170986003,},ContainerImage{Names:[192.168.1.252/library/envoy@sha256:b36ee021fc4d285de7861dbaee01e7437ce1d63814ead6ae3e4dfcad4a951b2e 192.168.1.252/library/envoy:v1.12.2],SizeBytes:170487454,},ContainerImage{Names:[192.168.1.252/library/kube-controller-manager@sha256:8f03391c0d22e3da8d22725178efe3c4338e1920504f7f4eb4a1c7f5f40c4c6e 192.168.1.252/library/kube-controller-manager:v1.17.3],SizeBytes:160918035,},ContainerImage{Names:[192.168.1.252/library/dashboard@sha256:4e0d39dae7e089b77fe2bbcef648f89905716db9c1f0884950bfd42d9f446c29 192.168.1.252/library/dashboard:v2.0.0-rc5],SizeBytes:126359420,},ContainerImage{Names:[192.168.1.252/library/kube-proxy@sha256:8dfd672298f8fbcd37731e0f2e45be87199cee6ee1f482ffec766693dcec0ec4 192.168.1.252/library/kube-proxy:v1.17.3],SizeBytes:115964919,},ContainerImage{Names:[192.168.1.252/library/pod2daemon-flexvol@sha256:2bf967507ad1adb749f3484b5d39e7d7b8700c4a0f836e8093dae5c57a585ccf 192.168.1.252/library/pod2daemon-flexvol:v3.12.0],SizeBytes:111122324,},ContainerImage{Names:[192.168.1.252/library/kube-scheduler@sha256:eea535b58da5b2fb66b32c61c6913d014cf85061a39527ad9bca2fa84b53dc1c 192.168.1.252/library/kube-scheduler:v1.17.3],SizeBytes:94435859,},ContainerImage{Names:[sonobuoy/sonobuoy@sha256:73f9cfe546ac6d0d5b94308293484c29c3b02fc341a975cfca97c80dd8728ed7 sonobuoy/sonobuoy:v0.17.2],SizeBytes:84339798,},ContainerImage{Names:[192.168.1.252/library/k8s-haproxy@sha256:f88bca67a2782e7bd88af168d5c11216c32104d12fe9240fac54a1d3196e3f9c 192.168.1.252/library/k8s-haproxy:2.0.0],SizeBytes:73550856,},ContainerImage{Names:[192.168.1.252/library/kube-controllers@sha256:edf14a5bcc663d2b0013b1830469626b7aa27206cbc7715ed83c042890ca5837 192.168.1.252/library/kube-controllers:v3.12.0],SizeBytes:56567983,},ContainerImage{Names:[192.168.1.252/library/typha@sha256:3baf9aef445a3224160748d6f560426eab798d6c65620020b2466e114bf6805f 192.168.1.252/library/typha:v3.12.0],SizeBytes:56034822,},ContainerImage{Names:[192.168.1.252/library/flannel@sha256:bd76b84c74ad70368a2341c2402841b75950df881388e43fc2aca000c546653a 192.168.1.252/library/flannel:v0.11.0-amd64],SizeBytes:52567296,},ContainerImage{Names:[192.168.1.252/library/ctl@sha256:128e4c95cf92a482496d591c43cad2a6a21fab1f0e8a8f13e8503f1324106dc8 192.168.1.252/library/ctl:v3.12.0],SizeBytes:47895826,},ContainerImage{Names:[192.168.1.252/library/coredns@sha256:608ac7ccba5ce41c6941fca13bc67059c1eef927fd968b554b790e21cc92543c 192.168.1.252/library/coredns:1.6.5],SizeBytes:41578211,},ContainerImage{Names:[192.168.1.252/library/metrics-scraper@sha256:e24a74b3b1cdc84d6285d507a12eb06907fd8c457b3e8ae9baa9418eca43efc4 192.168.1.252/library/metrics-scraper:v1.0.3],SizeBytes:40105664,},ContainerImage{Names:[192.168.1.252/library/metrics-server-amd64@sha256:c9c4e95068b51d6b33a9dccc61875df07dc650abbf4ac1a19d58b4628f89288b 
192.168.1.252/library/metrics-server-amd64:v0.3.6],SizeBytes:39944451,},ContainerImage{Names:[192.168.1.252/library/contour@sha256:3e10c69dfeaa830b84a50e6b47ce90e0f5a1aa84daf77f7662313077fa9579cf 192.168.1.252/library/contour:v1.1.0],SizeBytes:35721216,},ContainerImage{Names:[192.168.1.252/library/kuard-amd64@sha256:bd17153e9a3319f401acc7a27759243f37d422c06cbbf01cb3e1f54bbbfe14f4 192.168.1.252/library/kuard-amd64:1],SizeBytes:19745911,},ContainerImage{Names:[192.168.1.252/library/pause@sha256:fcaff905397ba63fd376d0c3019f1f1cb6e7506131389edbcb3d22719f1ae54d 192.168.1.252/library/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} | |
Mar 6 02:57:25.270: INFO: | |
Logging kubelet events for node master01 | |
Mar 6 02:57:25.274: INFO: | |
Logging pods the kubelet thinks are on node master01
Mar 6 02:57:25.283: INFO: kube-flannel-ds-amd64-6mbnb started at 2020-03-06 02:30:22 +0000 UTC (1+1 container statuses recorded) | |
Mar 6 02:57:25.283: INFO: Init container install-cni ready: true, restart count 0 | |
Mar 6 02:57:25.283: INFO: Container kube-flannel ready: true, restart count 0 | |
Mar 6 02:57:25.283: INFO: kube-proxy-4j8ft started at 2020-03-06 02:30:22 +0000 UTC (0+1 container statuses recorded) | |
Mar 6 02:57:25.283: INFO: Container kube-proxy ready: true, restart count 0 | |
Mar 6 02:57:25.283: INFO: kube-apiserver-master01 started at 2020-03-06 02:30:22 +0000 UTC (0+1 container statuses recorded) | |
Mar 6 02:57:25.283: INFO: Container kube-apiserver ready: true, restart count 0 | |
Mar 6 02:57:25.283: INFO: kube-controller-manager-master01 started at 2020-03-06 02:30:22 +0000 UTC (0+1 container statuses recorded) | |
Mar 6 02:57:25.283: INFO: Container kube-controller-manager ready: true, restart count 1 | |
Mar 6 02:57:25.283: INFO: kube-scheduler-master01 started at 2020-03-06 02:30:22 +0000 UTC (0+1 container statuses recorded) | |
Mar 6 02:57:25.283: INFO: Container kube-scheduler ready: true, restart count 1 | |
Mar 6 02:57:25.283: INFO: sonobuoy-systemd-logs-daemon-set-2e5dbce20e154397-drhpn started at 2020-03-06 02:38:12 +0000 UTC (0+2 container statuses recorded) | |
Mar 6 02:57:25.283: INFO: Container sonobuoy-worker ready: true, restart count 0 | |
Mar 6 02:57:25.283: INFO: Container systemd-logs ready: true, restart count 0 | |
W0306 02:57:25.285756 19 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. | |
Mar 6 02:57:25.302: INFO: | |
Latency metrics for node master01 | |
Mar 6 02:57:25.302: INFO: | |
Logging node info for node master02 | |
Mar 6 02:57:25.311: INFO: Node Info: &Node{ObjectMeta:{master02 /api/v1/nodes/master02 6a0ecb6f-ef31-4754-858b-3eba76999224 7180 0 2020-03-06 02:29:17 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master02 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"82:c1:38:99:3b:39"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:192.168.1.248 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{42140479488 0} {<nil>} 41152812Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3823214592 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{37926431477 0} {<nil>} 37926431477 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3718356992 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-03-06 02:53:53 +0000 UTC,LastTransitionTime:2020-03-06 02:29:16 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-03-06 02:53:53 +0000 UTC,LastTransitionTime:2020-03-06 02:29:16 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-03-06 02:53:53 +0000 UTC,LastTransitionTime:2020-03-06 02:29:16 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-03-06 02:53:53 +0000 UTC,LastTransitionTime:2020-03-06 02:30:10 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:192.168.1.248,},NodeAddress{Type:Hostname,Address:master02,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:20200220105402131453637367482142,SystemUUID:8B2C4639-6D22-4D0D-A03C-F6D7E328F9D5,BootID:efd7329f-ae31-4806-ba13-7fdd5fad57df,KernelVersion:3.10.0-1062.12.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.5,KubeletVersion:v1.17.3,KubeProxyVersion:v1.17.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[sonobuoy/systemd-logs@sha256:fadad24a66ddd544987c38811108a73d1a306dd3b5e3f090b207786f2825ffde sonobuoy/systemd-logs:v0.3],SizeBytes:297365055,},ContainerImage{Names:[192.168.1.252/library/etcd-amd64@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216 192.168.1.252/library/etcd-amd64:3.4.3-0],SizeBytes:288426917,},ContainerImage{Names:[192.168.1.252/library/kube-apiserver@sha256:4ee4113bcce32ae436b15364b09a2439dee15f6a19bbcf43a470d3dc879b0c62 
192.168.1.252/library/kube-apiserver:v1.17.3],SizeBytes:170986003,},ContainerImage{Names:[192.168.1.252/library/kube-controller-manager@sha256:8f03391c0d22e3da8d22725178efe3c4338e1920504f7f4eb4a1c7f5f40c4c6e 192.168.1.252/library/kube-controller-manager:v1.17.3],SizeBytes:160918035,},ContainerImage{Names:[192.168.1.252/library/kube-proxy@sha256:8dfd672298f8fbcd37731e0f2e45be87199cee6ee1f482ffec766693dcec0ec4 192.168.1.252/library/kube-proxy:v1.17.3],SizeBytes:115964919,},ContainerImage{Names:[192.168.1.252/library/kube-scheduler@sha256:eea535b58da5b2fb66b32c61c6913d014cf85061a39527ad9bca2fa84b53dc1c 192.168.1.252/library/kube-scheduler:v1.17.3],SizeBytes:94435859,},ContainerImage{Names:[sonobuoy/sonobuoy@sha256:73f9cfe546ac6d0d5b94308293484c29c3b02fc341a975cfca97c80dd8728ed7 sonobuoy/sonobuoy:v0.17.2],SizeBytes:84339798,},ContainerImage{Names:[192.168.1.252/library/k8s-haproxy@sha256:f88bca67a2782e7bd88af168d5c11216c32104d12fe9240fac54a1d3196e3f9c 192.168.1.252/library/k8s-haproxy:2.0.0],SizeBytes:73550856,},ContainerImage{Names:[192.168.1.252/library/flannel@sha256:bd76b84c74ad70368a2341c2402841b75950df881388e43fc2aca000c546653a 192.168.1.252/library/flannel:v0.11.0-amd64],SizeBytes:52567296,},ContainerImage{Names:[192.168.1.252/library/coredns@sha256:608ac7ccba5ce41c6941fca13bc67059c1eef927fd968b554b790e21cc92543c 192.168.1.252/library/coredns:1.6.5],SizeBytes:41578211,},ContainerImage{Names:[192.168.1.252/library/pause@sha256:fcaff905397ba63fd376d0c3019f1f1cb6e7506131389edbcb3d22719f1ae54d 192.168.1.252/library/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} | |
Mar 6 02:57:25.311: INFO: | |
Logging kubelet events for node master02 | |
Mar 6 02:57:25.318: INFO: | |
Logging pods the kubelet thinks are on node master02
Mar 6 02:57:25.328: INFO: kube-apiserver-master02 started at 2020-03-06 02:29:50 +0000 UTC (0+1 container statuses recorded) | |
Mar 6 02:57:25.328: INFO: Container kube-apiserver ready: true, restart count 0 | |
Mar 6 02:57:25.328: INFO: kube-controller-manager-master02 started at 2020-03-06 02:29:50 +0000 UTC (0+1 container statuses recorded) | |
Mar 6 02:57:25.328: INFO: Container kube-controller-manager ready: true, restart count 1 | |
Mar 6 02:57:25.328: INFO: kube-scheduler-master02 started at 2020-03-06 02:29:23 +0000 UTC (0+1 container statuses recorded) | |
Mar 6 02:57:25.328: INFO: Container kube-scheduler ready: true, restart count 1 | |
Mar 6 02:57:25.328: INFO: kube-proxy-scdss started at 2020-03-06 02:30:00 +0000 UTC (0+1 container statuses recorded) | |
Mar 6 02:57:25.328: INFO: Container kube-proxy ready: true, restart count 0 | |
Mar 6 02:57:25.328: INFO: kube-flannel-ds-amd64-vfl78 started at 2020-03-06 02:30:00 +0000 UTC (1+1 container statuses recorded) | |
Mar 6 02:57:25.328: INFO: Init container install-cni ready: true, restart count 0 | |
Mar 6 02:57:25.328: INFO: Container kube-flannel ready: true, restart count 0 | |
Mar 6 02:57:25.328: INFO: coredns-7795996659-phdkc started at 2020-03-06 02:30:13 +0000 UTC (0+1 container statuses recorded) | |
Mar 6 02:57:25.328: INFO: Container coredns ready: true, restart count 0 | |
Mar 6 02:57:25.328: INFO: sonobuoy-systemd-logs-daemon-set-2e5dbce20e154397-2zmwm started at 2020-03-06 02:38:12 +0000 UTC (0+2 container statuses recorded) | |
Mar 6 02:57:25.328: INFO: Container sonobuoy-worker ready: true, restart count 0 | |
Mar 6 02:57:25.328: INFO: Container systemd-logs ready: true, restart count 0 | |
W0306 02:57:25.330411 19 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. | |
Mar 6 02:57:25.344: INFO: | |
Latency metrics for node master02 | |
Mar 6 02:57:25.345: INFO: | |
Logging node info for node master03 | |
Mar 6 02:57:25.346: INFO: Node Info: &Node{ObjectMeta:{master03 /api/v1/nodes/master03 c508ee4c-fe9d-4c73-a857-e57fba26fa86 7181 0 2020-03-06 02:29:17 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master03 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"4a:aa:08:ea:16:90"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:192.168.1.249 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{42140479488 0} {<nil>} 41152812Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3823226880 0} {<nil>} 3733620Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{37926431477 0} {<nil>} 37926431477 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3718369280 0} {<nil>} 3631220Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-03-06 02:53:54 +0000 UTC,LastTransitionTime:2020-03-06 02:29:17 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-03-06 02:53:54 +0000 UTC,LastTransitionTime:2020-03-06 02:29:17 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-03-06 02:53:54 +0000 UTC,LastTransitionTime:2020-03-06 02:29:17 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-03-06 02:53:54 +0000 UTC,LastTransitionTime:2020-03-06 02:30:10 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:192.168.1.249,},NodeAddress{Type:Hostname,Address:master03,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:20200220105402131453637367482142,SystemUUID:0C8F0A67-EB7E-42DE-9418-0973FE735A08,BootID:05b1fa23-e6be-4032-bc93-8800264dff91,KernelVersion:3.10.0-1062.12.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.5,KubeletVersion:v1.17.3,KubeProxyVersion:v1.17.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[sonobuoy/systemd-logs@sha256:fadad24a66ddd544987c38811108a73d1a306dd3b5e3f090b207786f2825ffde sonobuoy/systemd-logs:v0.3],SizeBytes:297365055,},ContainerImage{Names:[192.168.1.252/library/etcd-amd64@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216 192.168.1.252/library/etcd-amd64:3.4.3-0],SizeBytes:288426917,},ContainerImage{Names:[192.168.1.252/library/kube-apiserver@sha256:4ee4113bcce32ae436b15364b09a2439dee15f6a19bbcf43a470d3dc879b0c62 
192.168.1.252/library/kube-apiserver:v1.17.3],SizeBytes:170986003,},ContainerImage{Names:[192.168.1.252/library/kube-controller-manager@sha256:8f03391c0d22e3da8d22725178efe3c4338e1920504f7f4eb4a1c7f5f40c4c6e 192.168.1.252/library/kube-controller-manager:v1.17.3],SizeBytes:160918035,},ContainerImage{Names:[192.168.1.252/library/dashboard@sha256:4e0d39dae7e089b77fe2bbcef648f89905716db9c1f0884950bfd42d9f446c29 192.168.1.252/library/dashboard:v2.0.0-rc5],SizeBytes:126359420,},ContainerImage{Names:[192.168.1.252/library/kube-proxy@sha256:8dfd672298f8fbcd37731e0f2e45be87199cee6ee1f482ffec766693dcec0ec4 192.168.1.252/library/kube-proxy:v1.17.3],SizeBytes:115964919,},ContainerImage{Names:[192.168.1.252/library/kube-scheduler@sha256:eea535b58da5b2fb66b32c61c6913d014cf85061a39527ad9bca2fa84b53dc1c 192.168.1.252/library/kube-scheduler:v1.17.3],SizeBytes:94435859,},ContainerImage{Names:[sonobuoy/sonobuoy@sha256:73f9cfe546ac6d0d5b94308293484c29c3b02fc341a975cfca97c80dd8728ed7 sonobuoy/sonobuoy:v0.17.2],SizeBytes:84339798,},ContainerImage{Names:[192.168.1.252/library/k8s-haproxy@sha256:f88bca67a2782e7bd88af168d5c11216c32104d12fe9240fac54a1d3196e3f9c 192.168.1.252/library/k8s-haproxy:2.0.0],SizeBytes:73550856,},ContainerImage{Names:[192.168.1.252/library/flannel@sha256:bd76b84c74ad70368a2341c2402841b75950df881388e43fc2aca000c546653a 192.168.1.252/library/flannel:v0.11.0-amd64],SizeBytes:52567296,},ContainerImage{Names:[192.168.1.252/library/coredns@sha256:608ac7ccba5ce41c6941fca13bc67059c1eef927fd968b554b790e21cc92543c 192.168.1.252/library/coredns:1.6.5],SizeBytes:41578211,},ContainerImage{Names:[192.168.1.252/library/metrics-scraper@sha256:e24a74b3b1cdc84d6285d507a12eb06907fd8c457b3e8ae9baa9418eca43efc4 192.168.1.252/library/metrics-scraper:v1.0.3],SizeBytes:40105664,},ContainerImage{Names:[192.168.1.252/library/pause@sha256:fcaff905397ba63fd376d0c3019f1f1cb6e7506131389edbcb3d22719f1ae54d 192.168.1.252/library/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} | |
Mar 6 02:57:25.346: INFO:
Logging kubelet events for node master03
Mar 6 02:57:25.351: INFO:
Logging pods the kubelet thinks are on node master03
Mar 6 02:57:25.360: INFO: kube-flannel-ds-amd64-hs69k started at 2020-03-06 02:30:00 +0000 UTC (1+1 container statuses recorded)
Mar 6 02:57:25.360: INFO: Init container install-cni ready: true, restart count 0
Mar 6 02:57:25.360: INFO: Container kube-flannel ready: true, restart count 0
Mar 6 02:57:25.360: INFO: dashboard-metrics-scraper-56568cb9d7-d57kl started at 2020-03-06 02:30:10 +0000 UTC (0+1 container statuses recorded)
Mar 6 02:57:25.360: INFO: Container dashboard-metrics-scraper ready: true, restart count 0
Mar 6 02:57:25.360: INFO: kube-controller-manager-master03 started at 2020-03-06 02:29:50 +0000 UTC (0+1 container statuses recorded)
Mar 6 02:57:25.360: INFO: Container kube-controller-manager ready: true, restart count 1
Mar 6 02:57:25.360: INFO: kube-scheduler-master03 started at 2020-03-06 02:29:38 +0000 UTC (0+1 container statuses recorded)
Mar 6 02:57:25.360: INFO: Container kube-scheduler ready: true, restart count 1
Mar 6 02:57:25.360: INFO: kube-proxy-stbnn started at 2020-03-06 02:30:00 +0000 UTC (0+1 container statuses recorded)
Mar 6 02:57:25.360: INFO: Container kube-proxy ready: true, restart count 0
Mar 6 02:57:25.360: INFO: kubernetes-dashboard-6647798d59-j2ms4 started at 2020-03-06 02:30:10 +0000 UTC (0+1 container statuses recorded)
Mar 6 02:57:25.360: INFO: Container kubernetes-dashboard ready: true, restart count 0
Mar 6 02:57:25.360: INFO: coredns-7795996659-cmq4d started at 2020-03-06 02:30:13 +0000 UTC (0+1 container statuses recorded)
Mar 6 02:57:25.360: INFO: Container coredns ready: true, restart count 0
Mar 6 02:57:25.360: INFO: sonobuoy-systemd-logs-daemon-set-2e5dbce20e154397-w5psq started at 2020-03-06 02:38:12 +0000 UTC (0+2 container statuses recorded)
Mar 6 02:57:25.360: INFO: Container sonobuoy-worker ready: true, restart count 0
Mar 6 02:57:25.360: INFO: Container systemd-logs ready: true, restart count 0
Mar 6 02:57:25.360: INFO: kube-apiserver-master03 started at 2020-03-06 02:29:24 +0000 UTC (0+1 container statuses recorded)
Mar 6 02:57:25.360: INFO: Container kube-apiserver ready: true, restart count 0
W0306 02:57:25.363325 19 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar 6 02:57:25.378: INFO:
Latency metrics for node master03
Mar 6 02:57:25.378: INFO:
Logging node info for node worker01
Mar 6 02:57:25.380: INFO: Node Info: &Node{ObjectMeta:{worker01 /api/v1/nodes/worker01 cf4203bb-1bfa-4b35-991f-935275b6bc46 7294 0 2020-03-06 02:30:30 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:worker01 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"5a:49:f5:5b:74:b3"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:192.168.1.250 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{42140479488 0} {<nil>} 41152812Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3823214592 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{37926431477 0} {<nil>} 37926431477 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3718356992 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-03-06 02:54:19 +0000 UTC,LastTransitionTime:2020-03-06 02:30:30 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-03-06 02:54:19 +0000 UTC,LastTransitionTime:2020-03-06 02:30:30 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-03-06 02:54:19 +0000 UTC,LastTransitionTime:2020-03-06 02:30:30 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-03-06 02:54:19 +0000 UTC,LastTransitionTime:2020-03-06 02:30:45 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:192.168.1.250,},NodeAddress{Type:Hostname,Address:worker01,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:20200220105402131453637367482142,SystemUUID:5CA364AA-0FF8-4B57-BA86-F28699575F0D,BootID:c85ad0c4-ebcf-4d01-97f0-a36c1cfc50be,KernelVersion:3.10.0-1062.12.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.5,KubeletVersion:v1.17.3,KubeProxyVersion:v1.17.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[sonobuoy/systemd-logs@sha256:fadad24a66ddd544987c38811108a73d1a306dd3b5e3f090b207786f2825ffde sonobuoy/systemd-logs:v0.3],SizeBytes:297365055,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[192.168.1.252/library/kube-proxy@sha256:8dfd672298f8fbcd37731e0f2e45be87199cee6ee1f482ffec766693dcec0ec4 192.168.1.252/library/kube-proxy:v1.17.3],SizeBytes:115964919,},ContainerImage{Names:[sonobuoy/sonobuoy@sha256:73f9cfe546ac6d0d5b94308293484c29c3b02fc341a975cfca97c80dd8728ed7 sonobuoy/sonobuoy:v0.17.2],SizeBytes:84339798,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5 gcr.io/kubernetes-e2e-test-images/agnhost:2.8],SizeBytes:52800335,},ContainerImage{Names:[192.168.1.252/library/flannel@sha256:bd76b84c74ad70368a2341c2402841b75950df881388e43fc2aca000c546653a 192.168.1.252/library/flannel:v0.11.0-amd64],SizeBytes:52567296,},ContainerImage{Names:[192.168.1.252/library/metrics-server-amd64@sha256:c9c4e95068b51d6b33a9dccc61875df07dc650abbf4ac1a19d58b4628f89288b 192.168.1.252/library/metrics-server-amd64:v0.3.6],SizeBytes:39944451,},ContainerImage{Names:[192.168.1.252/library/kuard-amd64@sha256:bd17153e9a3319f401acc7a27759243f37d422c06cbbf01cb3e1f54bbbfe14f4 192.168.1.252/library/kuard-amd64:1],SizeBytes:19745911,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[192.168.1.252/library/pause@sha256:fcaff905397ba63fd376d0c3019f1f1cb6e7506131389edbcb3d22719f1ae54d 192.168.1.252/library/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Mar 6 02:57:25.380: INFO:
Logging kubelet events for node worker01
Mar 6 02:57:25.384: INFO:
Logging pods the kubelet thinks are on node worker01
Mar 6 02:57:25.397: INFO: metrics-server-78799bf646-xrsnn started at 2020-03-06 02:30:51 +0000 UTC (0+1 container statuses recorded)
Mar 6 02:57:25.397: INFO: Container metrics-server ready: true, restart count 0
Mar 6 02:57:25.397: INFO: kuard-678c676f5d-vsn86 started at 2020-03-06 02:30:51 +0000 UTC (0+1 container statuses recorded)
Mar 6 02:57:25.397: INFO: Container kuard ready: true, restart count 0
Mar 6 02:57:25.397: INFO: kube-flannel-ds-amd64-xxhz9 started at 2020-03-06 02:30:30 +0000 UTC (1+1 container statuses recorded)
Mar 6 02:57:25.397: INFO: Init container install-cni ready: true, restart count 0
Mar 6 02:57:25.397: INFO: Container kube-flannel ready: true, restart count 1
Mar 6 02:57:25.397: INFO: contour-54748c65f5-jl5wz started at 2020-03-06 02:30:46 +0000 UTC (0+1 container statuses recorded)
Mar 6 02:57:25.397: INFO: Container contour ready: false, restart count 0
Mar 6 02:57:25.397: INFO: contour-certgen-82k46 started at 2020-03-06 02:30:46 +0000 UTC (0+1 container statuses recorded)
Mar 6 02:57:25.397: INFO: Container contour ready: false, restart count 0
Mar 6 02:57:25.397: INFO: kuard-678c676f5d-m29b6 started at 2020-03-06 02:30:49 +0000 UTC (0+1 container statuses recorded)
Mar 6 02:57:25.397: INFO: Container kuard ready: true, restart count 0
Mar 6 02:57:25.397: INFO: kuard-678c676f5d-tzsnn started at 2020-03-06 02:30:51 +0000 UTC (0+1 container statuses recorded)
Mar 6 02:57:25.397: INFO: Container kuard ready: true, restart count 0
Mar 6 02:57:25.397: INFO: contour-54748c65f5-gk5sz started at 2020-03-06 02:30:51 +0000 UTC (0+1 container statuses recorded)
Mar 6 02:57:25.397: INFO: Container contour ready: false, restart count 0
Mar 6 02:57:25.397: INFO: sonobuoy-systemd-logs-daemon-set-2e5dbce20e154397-2bz8g started at 2020-03-06 02:38:12 +0000 UTC (0+2 container statuses recorded)
Mar 6 02:57:25.397: INFO: Container sonobuoy-worker ready: true, restart count 0
Mar 6 02:57:25.397: INFO: Container systemd-logs ready: true, restart count 0
Mar 6 02:57:25.397: INFO: kube-proxy-kcb8f started at 2020-03-06 02:30:30 +0000 UTC (0+1 container statuses recorded)
Mar 6 02:57:25.397: INFO: Container kube-proxy ready: true, restart count 0
Mar 6 02:57:25.397: INFO: envoy-lvmcb started at 2020-03-06 02:30:45 +0000 UTC (1+1 container statuses recorded)
Mar 6 02:57:25.397: INFO: Init container envoy-initconfig ready: false, restart count 0
Mar 6 02:57:25.397: INFO: Container envoy ready: false, restart count 0
W0306 02:57:25.400010 19 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar 6 02:57:25.427: INFO:
Latency metrics for node worker01
Mar 6 02:57:25.427: INFO:
Logging node info for node worker02
Mar 6 02:57:25.429: INFO: Node Info: &Node{ObjectMeta:{worker02 /api/v1/nodes/worker02 f0994ba1-7e4e-4cc8-b3c8-25d34b25d9ce 7669 0 2020-03-06 02:30:30 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:worker02 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"1a:75:0a:e8:cc:76"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:192.168.1.251 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{42140479488 0} {<nil>} 41152812Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3823214592 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{37926431477 0} {<nil>} 37926431477 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3718356992 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-03-06 02:55:18 +0000 UTC,LastTransitionTime:2020-03-06 02:30:30 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-03-06 02:55:18 +0000 UTC,LastTransitionTime:2020-03-06 02:30:30 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-03-06 02:55:18 +0000 UTC,LastTransitionTime:2020-03-06 02:30:30 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-03-06 02:55:18 +0000 UTC,LastTransitionTime:2020-03-06 02:30:55 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:192.168.1.251,},NodeAddress{Type:Hostname,Address:worker02,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:20200220105402131453637367482142,SystemUUID:EDBF7E33-228B-4233-93CF-7850B5A311E4,BootID:bd6a4f0f-5ddb-4585-83df-253b9292b617,KernelVersion:3.10.0-1062.12.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.5,KubeletVersion:v1.17.3,KubeProxyVersion:v1.17.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[gcr.io/google-containers/conformance@sha256:502434491cbc3fac5d9a606a879e554cf881b2ba5b688bed25f2c33d3ff1c777 gcr.io/google-containers/conformance:v1.17.3],SizeBytes:575831882,},ContainerImage{Names:[sonobuoy/systemd-logs@sha256:fadad24a66ddd544987c38811108a73d1a306dd3b5e3f090b207786f2825ffde sonobuoy/systemd-logs:v0.3],SizeBytes:297365055,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[192.168.1.252/library/kube-proxy@sha256:8dfd672298f8fbcd37731e0f2e45be87199cee6ee1f482ffec766693dcec0ec4 192.168.1.252/library/kube-proxy:v1.17.3],SizeBytes:115964919,},ContainerImage{Names:[sonobuoy/sonobuoy@sha256:73f9cfe546ac6d0d5b94308293484c29c3b02fc341a975cfca97c80dd8728ed7 sonobuoy/sonobuoy:v0.17.2],SizeBytes:84339798,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5 gcr.io/kubernetes-e2e-test-images/agnhost:2.8],SizeBytes:52800335,},ContainerImage{Names:[192.168.1.252/library/flannel@sha256:bd76b84c74ad70368a2341c2402841b75950df881388e43fc2aca000c546653a 192.168.1.252/library/flannel:v0.11.0-amd64],SizeBytes:52567296,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0],SizeBytes:4732240,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0],SizeBytes:1450451,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[192.168.1.252/library/pause@sha256:fcaff905397ba63fd376d0c3019f1f1cb6e7506131389edbcb3d22719f1ae54d k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea 192.168.1.252/library/pause:3.1 k8s.gcr.io/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Mar 6 02:57:25.429: INFO:
Logging kubelet events for node worker02
Mar 6 02:57:25.432: INFO:
Logging pods the kubelet thinks are on node worker02
Mar 6 02:57:25.436: INFO: kube-flannel-ds-amd64-ztfzf started at 2020-03-06 02:30:30 +0000 UTC (1+1 container statuses recorded)
Mar 6 02:57:25.436: INFO: Init container install-cni ready: true, restart count 0
Mar 6 02:57:25.436: INFO: Container kube-flannel ready: true, restart count 0
Mar 6 02:57:25.436: INFO: envoy-wgz76 started at 2020-03-06 02:30:55 +0000 UTC (1+1 container statuses recorded)
Mar 6 02:57:25.436: INFO: Init container envoy-initconfig ready: false, restart count 0
Mar 6 02:57:25.436: INFO: Container envoy ready: false, restart count 0
Mar 6 02:57:25.436: INFO: sonobuoy-e2e-job-67137ff64ac145d3 started at 2020-03-06 02:38:12 +0000 UTC (0+2 container statuses recorded)
Mar 6 02:57:25.436: INFO: Container e2e ready: true, restart count 0
Mar 6 02:57:25.436: INFO: Container sonobuoy-worker ready: true, restart count 0
Mar 6 02:57:25.436: INFO: sonobuoy-systemd-logs-daemon-set-2e5dbce20e154397-bpjtd started at 2020-03-06 02:38:12 +0000 UTC (0+2 container statuses recorded)
Mar 6 02:57:25.436: INFO: Container sonobuoy-worker ready: true, restart count 0
Mar 6 02:57:25.436: INFO: Container systemd-logs ready: true, restart count 0
Mar 6 02:57:25.436: INFO: kube-proxy-5xxdb started at 2020-03-06 02:30:30 +0000 UTC (0+1 container statuses recorded)
Mar 6 02:57:25.436: INFO: Container kube-proxy ready: true, restart count 1
Mar 6 02:57:25.436: INFO: sonobuoy started at 2020-03-06 02:38:02 +0000 UTC (0+1 container statuses recorded)
Mar 6 02:57:25.436: INFO: Container kube-sonobuoy ready: true, restart count 0
Mar 6 02:57:25.436: INFO: sample-webhook-deployment-5f65f8c764-d7djs started at 2020-03-06 02:56:31 +0000 UTC (0+1 container statuses recorded)
Mar 6 02:57:25.436: INFO: Container sample-webhook ready: true, restart count 0
W0306 02:57:25.438622 19 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar 6 02:57:25.458: INFO:
Latency metrics for node worker02
Mar 6 02:57:25.458: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4758" for this suite.
STEP: Destroying namespace "webhook-4758-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• Failure [54.252 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should mutate pod and apply defaults after mutation [Conformance] [It]
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Mar 6 02:57:25.259: waiting for webhook configuration to be ready
Unexpected error:
<*errors.errorString | 0xc0000b3950>: {
s: "timed out waiting for the condition",
}
timed out waiting for the condition
occurred
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:1055
------------------------------
{"msg":"FAILED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":278,"completed":45,"skipped":888,"failed":3,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]"]}
SS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
works for CRD preserving unknown fields at the schema root [Conformance]
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 6 02:57:25.529: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in crd-publish-openapi-9035
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD preserving unknown fields at the schema root [Conformance]
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Mar 6 02:57:25.679: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Mar 6 02:57:33.456: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 --namespace=crd-publish-openapi-9035 create -f -'
Mar 6 02:57:33.930: INFO: stderr: ""
Mar 6 02:57:33.930: INFO: stdout: "e2e-test-crd-publish-openapi-3692-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n"
Mar 6 02:57:33.930: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 --namespace=crd-publish-openapi-9035 delete e2e-test-crd-publish-openapi-3692-crds test-cr'
Mar 6 02:57:34.009: INFO: stderr: ""
Mar 6 02:57:34.009: INFO: stdout: "e2e-test-crd-publish-openapi-3692-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n"
Mar 6 02:57:34.009: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 --namespace=crd-publish-openapi-9035 apply -f -'
Mar 6 02:57:34.156: INFO: stderr: ""
Mar 6 02:57:34.156: INFO: stdout: "e2e-test-crd-publish-openapi-3692-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n"
Mar 6 02:57:34.156: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 --namespace=crd-publish-openapi-9035 delete e2e-test-crd-publish-openapi-3692-crds test-cr'
Mar 6 02:57:34.233: INFO: stderr: ""
Mar 6 02:57:34.233: INFO: stdout: "e2e-test-crd-publish-openapi-3692-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR
Mar 6 02:57:34.233: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 explain e2e-test-crd-publish-openapi-3692-crds'
Mar 6 02:57:34.433: INFO: stderr: ""
Mar 6 02:57:34.433: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-3692-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n <empty>\n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 6 02:57:37.177: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-9035" for this suite.
• [SLOW TEST:11.656 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
works for CRD preserving unknown fields at the schema root [Conformance]
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":278,"completed":46,"skipped":890,"failed":3,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]"]}
SSSSSS
------------------------------
[sig-storage] Projected secret
should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 6 02:57:37.185: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename projected
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-5529
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with secret that has name projected-secret-test-map-19319ffa-02ef-48f1-8d9a-835deed2a25a
STEP: Creating a pod to test consume secrets
Mar 6 02:57:37.342: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-8824f421-1259-44d9-b8c2-b77f999f153b" in namespace "projected-5529" to be "success or failure"
Mar 6 02:57:37.345: INFO: Pod "pod-projected-secrets-8824f421-1259-44d9-b8c2-b77f999f153b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.983062ms
Mar 6 02:57:39.348: INFO: Pod "pod-projected-secrets-8824f421-1259-44d9-b8c2-b77f999f153b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.005548591s
STEP: Saw pod success
Mar 6 02:57:39.348: INFO: Pod "pod-projected-secrets-8824f421-1259-44d9-b8c2-b77f999f153b" satisfied condition "success or failure"
Mar 6 02:57:39.350: INFO: Trying to get logs from node worker02 pod pod-projected-secrets-8824f421-1259-44d9-b8c2-b77f999f153b container projected-secret-volume-test: <nil>
STEP: delete the pod
Mar 6 02:57:39.363: INFO: Waiting for pod pod-projected-secrets-8824f421-1259-44d9-b8c2-b77f999f153b to disappear
Mar 6 02:57:39.366: INFO: Pod pod-projected-secrets-8824f421-1259-44d9-b8c2-b77f999f153b no longer exists
[AfterEach] [sig-storage] Projected secret
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 6 02:57:39.366: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5529" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":47,"skipped":896,"failed":3,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]"]}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet
should adopt matching pods on creation and release no longer matching pods [Conformance]
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] ReplicaSet
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 6 02:57:39.372: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename replicaset
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in replicaset-5003
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods changes
Mar 6 02:57:42.526: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 6 02:57:43.539: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-5003" for this suite.
•{"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":278,"completed":48,"skipped":914,"failed":3,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]"]}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1
should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] version v1
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 6 02:57:43.549: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename proxy
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in proxy-411
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Mar 6 02:57:43.694: INFO: (0) /api/v1/nodes/worker01:10250/proxy/logs/: <pre>
<a href="anaconda/">anaconda/</a>
<a href="audit/">audit/</a>
<a href="boot.log">boot.log</... (200; 4.979088ms)
Mar 6 02:57:43.697: INFO: (1) /api/v1/nodes/worker01:10250/proxy/logs/: <pre>
<a href="anaconda/">anaconda/</a>
<a href="audit/">audit/</a>
<a href="boot.log">boot.log</... (200; 2.70101ms)
Mar 6 02:57:43.699: INFO: (2) /api/v1/nodes/worker01:10250/proxy/logs/: <pre>
<a href="anaconda/">anaconda/</a>
<a href="audit/">audit/</a>
<a href="boot.log">boot.log</... (200; 2.364948ms)
Mar 6 02:57:43.703: INFO: (3) /api/v1/nodes/worker01:10250/proxy/logs/: <pre>
<a href="anaconda/">anaconda/</a>
<a href="audit/">audit/</a>
<a href="boot.log">boot.log</... (200; 3.497401ms)
Mar 6 02:57:43.710: INFO: (4) /api/v1/nodes/worker01:10250/proxy/logs/: <pre>
<a href="anaconda/">anaconda/</a>
<a href="audit/">audit/</a>
<a href="boot.log">boot.log</... (200; 7.637097ms)
Mar 6 02:57:43.713: INFO: (5) /api/v1/nodes/worker01:10250/proxy/logs/: <pre>
<a href="anaconda/">anaconda/</a>
<a href="audit/">audit/</a>
<a href="boot.log">boot.log</... (200; 2.280585ms)
Mar 6 02:57:43.718: INFO: (6) /api/v1/nodes/worker01:10250/proxy/logs/: <pre>
<a href="anaconda/">anaconda/</a>
<a href="audit/">audit/</a>
<a href="boot.log">boot.log</... (200; 5.821246ms)
Mar 6 02:57:43.721: INFO: (7) /api/v1/nodes/worker01:10250/proxy/logs/: <pre>
<a href="anaconda/">anaconda/</a>
<a href="audit/">audit/</a>
<a href="boot.log">boot.log</... (200; 2.419524ms)
Mar 6 02:57:43.723: INFO: (8) /api/v1/nodes/worker01:10250/proxy/logs/: <pre>
<a href="anaconda/">anaconda/</a>
<a href="audit/">audit/</a>
<a href="boot.log">boot.log</... (200; 1.993129ms)
Mar 6 02:57:43.725: INFO: (9) /api/v1/nodes/worker01:10250/proxy/logs/: <pre>
<a href="anaconda/">anaconda/</a>
<a href="audit/">audit/</a>
<a href="boot.log">boot.log</... (200; 2.401539ms)
Mar 6 02:57:43.728: INFO: (10) /api/v1/nodes/worker01:10250/proxy/logs/: <pre>
<a href="anaconda/">anaconda/</a>
<a href="audit/">audit/</a>
<a href="boot.log">boot.log</... (200; 2.267057ms)
Mar 6 02:57:43.730: INFO: (11) /api/v1/nodes/worker01:10250/proxy/logs/: <pre>
<a href="anaconda/">anaconda/</a>
<a href="audit/">audit/</a>
<a href="boot.log">boot.log</... (200; 1.99706ms)
Mar 6 02:57:43.732: INFO: (12) /api/v1/nodes/worker01:10250/proxy/logs/: <pre>
<a href="anaconda/">anaconda/</a>
<a href="audit/">audit/</a>
<a href="boot.log">boot.log</... (200; 2.449925ms)
Mar 6 02:57:43.734: INFO: (13) /api/v1/nodes/worker01:10250/proxy/logs/: <pre>
<a href="anaconda/">anaconda/</a>
<a href="audit/">audit/</a>
<a href="boot.log">boot.log</... (200; 2.221879ms)
Mar 6 02:57:43.736: INFO: (14) /api/v1/nodes/worker01:10250/proxy/logs/: <pre>
<a href="anaconda/">anaconda/</a>
<a href="audit/">audit/</a>
<a href="boot.log">boot.log</... (200; 2.012024ms)
Mar 6 02:57:43.739: INFO: (15) /api/v1/nodes/worker01:10250/proxy/logs/: <pre>
<a href="anaconda/">anaconda/</a>
<a href="audit/">audit/</a>
<a href="boot.log">boot.log</... (200; 2.884404ms)
Mar 6 02:57:43.741: INFO: (16) /api/v1/nodes/worker01:10250/proxy/logs/: <pre>
<a href="anaconda/">anaconda/</a>
<a href="audit/">audit/</a>
<a href="boot.log">boot.log</... (200; 2.236627ms)
Mar 6 02:57:43.744: INFO: (17) /api/v1/nodes/worker01:10250/proxy/logs/: <pre>
<a href="anaconda/">anaconda/</a>
<a href="audit/">audit/</a>
<a href="boot.log">boot.log</... (200; 2.090002ms)
Mar 6 02:57:43.746: INFO: (18) /api/v1/nodes/worker01:10250/proxy/logs/: <pre>
<a href="anaconda/">anaconda/</a>
<a href="audit/">audit/</a>
<a href="boot.log">boot.log</... (200; 2.560554ms)
Mar 6 02:57:43.748: INFO: (19) /api/v1/nodes/worker01:10250/proxy/logs/: <pre>
<a href="anaconda/">anaconda/</a>
<a href="audit/">audit/</a>
<a href="boot.log">boot.log</... (200; 2.34903ms)
[AfterEach] version v1
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 6 02:57:43.748: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-411" for this suite.
•{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]","total":278,"completed":49,"skipped":934,"failed":3,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services
should be able to change the type from ExternalName to NodePort [Conformance]
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 6 02:57:43.755: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename services
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in services-2127
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should be able to change the type from ExternalName to NodePort [Conformance]
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a service externalname-service with the type=ExternalName in namespace services-2127
STEP: changing the ExternalName service to type=NodePort
STEP: creating replication controller externalname-service in namespace services-2127
I0306 02:57:43.924347 19 runners.go:189] Created replication controller with name: externalname-service, namespace: services-2127, replica count: 2
Mar 6 02:57:46.974: INFO: Creating new exec pod
I0306 02:57:46.974597 19 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Mar 6 02:57:49.988: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 exec --namespace=services-2127 execpodqvhl8 -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
Mar 6 02:57:50.178: INFO: stderr: "+ nc -zv -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\n"
Mar 6 02:57:50.178: INFO: stdout: ""
Mar 6 02:57:50.178: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 exec --namespace=services-2127 execpodqvhl8 -- /bin/sh -x -c nc -zv -t -w 2 10.102.171.180 80'
Mar 6 02:57:50.390: INFO: stderr: "+ nc -zv -t -w 2 10.102.171.180 80\nConnection to 10.102.171.180 80 port [tcp/http] succeeded!\n"
Mar 6 02:57:50.390: INFO: stdout: ""
Mar 6 02:57:50.390: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 exec --namespace=services-2127 execpodqvhl8 -- /bin/sh -x -c nc -zv -t -w 2 192.168.1.250 31433'
Mar 6 02:57:50.598: INFO: stderr: "+ nc -zv -t -w 2 192.168.1.250 31433\nConnection to 192.168.1.250 31433 port [tcp/31433] succeeded!\n"
Mar 6 02:57:50.598: INFO: stdout: ""
Mar 6 02:57:50.598: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 exec --namespace=services-2127 execpodqvhl8 -- /bin/sh -x -c nc -zv -t -w 2 192.168.1.251 31433'
Mar 6 02:57:52.825: INFO: rc: 1
Mar 6 02:57:52.825: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 exec --namespace=services-2127 execpodqvhl8 -- /bin/sh -x -c nc -zv -t -w 2 192.168.1.251 31433:
Command stdout:
stderr:
+ nc -zv -t -w 2 192.168.1.251 31433
nc: connect to 192.168.1.251 port 31433 (tcp) timed out: Operation in progress
command terminated with exit code 1
error:
exit status 1
Retrying...
Mar 6 02:57:53.825: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 exec --namespace=services-2127 execpodqvhl8 -- /bin/sh -x -c nc -zv -t -w 2 192.168.1.251 31433'
Mar 6 02:57:54.002: INFO: stderr: "+ nc -zv -t -w 2 192.168.1.251 31433\nConnection to 192.168.1.251 31433 port [tcp/31433] succeeded!\n"
Mar 6 02:57:54.002: INFO: stdout: ""
Mar 6 02:57:54.002: INFO: Cleaning up the ExternalName to NodePort test service
[AfterEach] [sig-network] Services
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 6 02:57:54.048: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-2127" for this suite.
[AfterEach] [sig-network] Services
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143
• [SLOW TEST:10.309 seconds]
[sig-network] Services
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should be able to change the type from ExternalName to NodePort [Conformance]
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":278,"completed":50,"skipped":972,"failed":3,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]"]}
SS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
works for multiple CRDs of different groups [Conformance]
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 6 02:57:54.064: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in crd-publish-openapi-3452
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of different groups [Conformance]
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation
Mar 6 02:57:54.223: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
Mar 6 02:58:02.105: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 6 02:58:18.012: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-3452" for this suite.
• [SLOW TEST:23.975 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
works for multiple CRDs of different groups [Conformance]
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":278,"completed":51,"skipped":974,"failed":3,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]"]}
SSSSS
------------------------------
[sig-storage] ConfigMap
binary data should be reflected in volume [NodeConformance] [Conformance]
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 6 02:58:18.040: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename configmap
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in configmap-6861
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-upd-56e2b002-70f7-4243-bc70-64e3ac944e2f
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 6 02:58:20.209: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6861" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":52,"skipped":979,"failed":3,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap
should be consumable via the environment [NodeConformance] [Conformance]
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] ConfigMap
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 6 02:58:20.219: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename configmap
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in configmap-4110
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap configmap-4110/configmap-test-8967a316-569d-4ca0-9717-1f3a2bb68d48
STEP: Creating a pod to test consume configMaps
Mar 6 02:58:20.356: INFO: Waiting up to 5m0s for pod "pod-configmaps-16b7814e-a77b-4cf1-b5ae-2bf56809e46a" in namespace "configmap-4110" to be "success or failure"
Mar 6 02:58:20.357: INFO: Pod "pod-configmaps-16b7814e-a77b-4cf1-b5ae-2bf56809e46a": Phase="Pending", Reason="", readiness=false. Elapsed: 1.791524ms
Mar 6 02:58:22.360: INFO: Pod "pod-configmaps-16b7814e-a77b-4cf1-b5ae-2bf56809e46a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.004570974s
STEP: Saw pod success
Mar 6 02:58:22.360: INFO: Pod "pod-configmaps-16b7814e-a77b-4cf1-b5ae-2bf56809e46a" satisfied condition "success or failure"
Mar 6 02:58:22.362: INFO: Trying to get logs from node worker02 pod pod-configmaps-16b7814e-a77b-4cf1-b5ae-2bf56809e46a container env-test: <nil>
STEP: delete the pod
Mar 6 02:58:22.376: INFO: Waiting for pod pod-configmaps-16b7814e-a77b-4cf1-b5ae-2bf56809e46a to disappear
Mar 6 02:58:22.378: INFO: Pod pod-configmaps-16b7814e-a77b-4cf1-b5ae-2bf56809e46a no longer exists
[AfterEach] [sig-node] ConfigMap
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 6 02:58:22.378: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4110" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":278,"completed":53,"skipped":1014,"failed":3,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap
should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 6 02:58:22.385: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename configmap
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in configmap-4430
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-d617537b-617b-481f-896f-5f75338c9316
STEP: Creating a pod to test consume configMaps
Mar 6 02:58:22.536: INFO: Waiting up to 5m0s for pod "pod-configmaps-075c27e1-cf03-44ec-9306-e4131c494977" in namespace "configmap-4430" to be "success or failure"
Mar 6 02:58:22.539: INFO: Pod "pod-configmaps-075c27e1-cf03-44ec-9306-e4131c494977": Phase="Pending", Reason="", readiness=false. Elapsed: 2.340189ms
Mar 6 02:58:24.541: INFO: Pod "pod-configmaps-075c27e1-cf03-44ec-9306-e4131c494977": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.004793484s
STEP: Saw pod success
Mar 6 02:58:24.541: INFO: Pod "pod-configmaps-075c27e1-cf03-44ec-9306-e4131c494977" satisfied condition "success or failure"
Mar 6 02:58:24.543: INFO: Trying to get logs from node worker02 pod pod-configmaps-075c27e1-cf03-44ec-9306-e4131c494977 container configmap-volume-test: <nil>
STEP: delete the pod
Mar 6 02:58:24.557: INFO: Waiting for pod pod-configmaps-075c27e1-cf03-44ec-9306-e4131c494977 to disappear
Mar 6 02:58:24.560: INFO: Pod pod-configmaps-075c27e1-cf03-44ec-9306-e4131c494977 no longer exists
[AfterEach] [sig-storage] ConfigMap
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 6 02:58:24.560: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4430" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":278,"completed":54,"skipped":1034,"failed":3,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS | |
------------------------------ | |
[sig-node] Downward API | |
should provide host IP as an env var [NodeConformance] [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
[BeforeEach] [sig-node] Downward API | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 | |
STEP: Creating a kubernetes client | |
Mar 6 02:58:24.567: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759 | |
STEP: Building a namespace api object, basename downward-api | |
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-3206 | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[It] should provide host IP as an env var [NodeConformance] [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
STEP: Creating a pod to test downward api env vars | |
Mar 6 02:58:24.709: INFO: Waiting up to 5m0s for pod "downward-api-e11faed6-8187-4330-a238-7a28e4d0204a" in namespace "downward-api-3206" to be "success or failure" | |
Mar 6 02:58:24.721: INFO: Pod "downward-api-e11faed6-8187-4330-a238-7a28e4d0204a": Phase="Pending", Reason="", readiness=false. Elapsed: 11.862051ms | |
Mar 6 02:58:26.723: INFO: Pod "downward-api-e11faed6-8187-4330-a238-7a28e4d0204a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.013985369s | |
STEP: Saw pod success | |
Mar 6 02:58:26.723: INFO: Pod "downward-api-e11faed6-8187-4330-a238-7a28e4d0204a" satisfied condition "success or failure" | |
Mar 6 02:58:26.726: INFO: Trying to get logs from node worker02 pod downward-api-e11faed6-8187-4330-a238-7a28e4d0204a container dapi-container: <nil> | |
STEP: delete the pod | |
Mar 6 02:58:26.740: INFO: Waiting for pod downward-api-e11faed6-8187-4330-a238-7a28e4d0204a to disappear | |
Mar 6 02:58:26.742: INFO: Pod downward-api-e11faed6-8187-4330-a238-7a28e4d0204a no longer exists | |
[AfterEach] [sig-node] Downward API | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 | |
Mar 6 02:58:26.742: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
STEP: Destroying namespace "downward-api-3206" for this suite. | |
•{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":278,"completed":55,"skipped":1078,"failed":3,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]"]} | |
SSSSSSSSSSSSSSSS | |
------------------------------ | |
[k8s.io] Probing container | |
should have monotonically increasing restart count [NodeConformance] [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
[BeforeEach] [k8s.io] Probing container | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 | |
STEP: Creating a kubernetes client | |
Mar 6 02:58:26.749: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759 | |
STEP: Building a namespace api object, basename container-probe | |
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-probe-1621 | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[BeforeEach] [k8s.io] Probing container | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 | |
[It] should have monotonically increasing restart count [NodeConformance] [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
STEP: Creating pod liveness-c1ea9a92-f2c3-4e28-9a71-0c5019eec319 in namespace container-probe-1621 | |
Mar 6 02:58:28.890: INFO: Started pod liveness-c1ea9a92-f2c3-4e28-9a71-0c5019eec319 in namespace container-probe-1621 | |
STEP: checking the pod's current state and verifying that restartCount is present | |
Mar 6 02:58:28.892: INFO: Initial restart count of pod liveness-c1ea9a92-f2c3-4e28-9a71-0c5019eec319 is 0 | |
Mar 6 02:58:42.916: INFO: Restart count of pod container-probe-1621/liveness-c1ea9a92-f2c3-4e28-9a71-0c5019eec319 is now 1 (14.023778808s elapsed) | |
Mar 6 02:59:00.938: INFO: Restart count of pod container-probe-1621/liveness-c1ea9a92-f2c3-4e28-9a71-0c5019eec319 is now 2 (32.046591523s elapsed) | |
Mar 6 02:59:20.964: INFO: Restart count of pod container-probe-1621/liveness-c1ea9a92-f2c3-4e28-9a71-0c5019eec319 is now 3 (52.071703627s elapsed) | |
Mar 6 02:59:42.992: INFO: Restart count of pod container-probe-1621/liveness-c1ea9a92-f2c3-4e28-9a71-0c5019eec319 is now 4 (1m14.099871943s elapsed) | |
Mar 6 03:00:47.084: INFO: Restart count of pod container-probe-1621/liveness-c1ea9a92-f2c3-4e28-9a71-0c5019eec319 is now 5 (2m18.192333706s elapsed) | |
STEP: deleting the pod | |
[AfterEach] [k8s.io] Probing container | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 | |
Mar 6 03:00:47.092: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
STEP: Destroying namespace "container-probe-1621" for this suite. | |
• [SLOW TEST:140.351 seconds] | |
[k8s.io] Probing container | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 | |
should have monotonically increasing restart count [NodeConformance] [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
------------------------------ | |
{"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":278,"completed":56,"skipped":1094,"failed":3,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]"]} | |
SSSSSSSSSSSSSSSS | |
------------------------------ | |
[k8s.io] Container Runtime blackbox test on terminated container | |
should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
[BeforeEach] [k8s.io] Container Runtime | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 | |
STEP: Creating a kubernetes client | |
Mar 6 03:00:47.100: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759 | |
STEP: Building a namespace api object, basename container-runtime | |
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-runtime-9911 | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
STEP: create the container | |
STEP: wait for the container to reach Failed | |
STEP: get the container status | |
STEP: the container should be terminated | |
STEP: the termination message should be set | |
Mar 6 03:00:49.253: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- | |
STEP: delete the container | |
[AfterEach] [k8s.io] Container Runtime | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 | |
Mar 6 03:00:49.265: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
STEP: Destroying namespace "container-runtime-9911" for this suite. | |
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":57,"skipped":1110,"failed":3,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]"]} | |
SSSSSSSS | |
------------------------------ | |
[sig-auth] ServiceAccounts | |
should allow opting out of API token automount [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
[BeforeEach] [sig-auth] ServiceAccounts | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 | |
STEP: Creating a kubernetes client | |
Mar 6 03:00:49.271: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759 | |
STEP: Building a namespace api object, basename svcaccounts | |
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in svcaccounts-494 | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[It] should allow opting out of API token automount [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
STEP: getting the auto-created API token | |
Mar 6 03:00:49.920: INFO: created pod pod-service-account-defaultsa | |
Mar 6 03:00:49.920: INFO: pod pod-service-account-defaultsa service account token volume mount: true | |
Mar 6 03:00:49.923: INFO: created pod pod-service-account-mountsa | |
Mar 6 03:00:49.923: INFO: pod pod-service-account-mountsa service account token volume mount: true | |
Mar 6 03:00:49.929: INFO: created pod pod-service-account-nomountsa | |
Mar 6 03:00:49.929: INFO: pod pod-service-account-nomountsa service account token volume mount: false | |
Mar 6 03:00:49.934: INFO: created pod pod-service-account-defaultsa-mountspec | |
Mar 6 03:00:49.934: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true | |
Mar 6 03:00:49.941: INFO: created pod pod-service-account-mountsa-mountspec | |
Mar 6 03:00:49.941: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true | |
Mar 6 03:00:49.945: INFO: created pod pod-service-account-nomountsa-mountspec | |
Mar 6 03:00:49.945: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true | |
Mar 6 03:00:49.950: INFO: created pod pod-service-account-defaultsa-nomountspec | |
Mar 6 03:00:49.950: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false | |
Mar 6 03:00:49.957: INFO: created pod pod-service-account-mountsa-nomountspec | |
Mar 6 03:00:49.957: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false | |
Mar 6 03:00:49.969: INFO: created pod pod-service-account-nomountsa-nomountspec | |
Mar 6 03:00:49.969: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false | |
[AfterEach] [sig-auth] ServiceAccounts | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 | |
Mar 6 03:00:49.969: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
STEP: Destroying namespace "svcaccounts-494" for this suite. | |
•{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":278,"completed":58,"skipped":1118,"failed":3,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]"]} | |
SS | |
------------------------------ | |
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] | |
should deny crd creation [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 | |
STEP: Creating a kubernetes client | |
Mar 6 03:00:49.986: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759 | |
STEP: Building a namespace api object, basename webhook | |
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in webhook-4813 | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 | |
STEP: Setting up server cert | |
STEP: Create role binding to let webhook read extension-apiserver-authentication | |
STEP: Deploying the webhook pod | |
STEP: Wait for the deployment to be ready | |
Mar 6 03:00:50.410: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set | |
Mar 6 03:00:52.421: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719060450, loc:(*time.Location)(0x7db7bc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719060450, loc:(*time.Location)(0x7db7bc0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719060450, loc:(*time.Location)(0x7db7bc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719060450, loc:(*time.Location)(0x7db7bc0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} | |
Mar 6 03:00:54.423: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719060450, loc:(*time.Location)(0x7db7bc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719060450, loc:(*time.Location)(0x7db7bc0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719060450, loc:(*time.Location)(0x7db7bc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719060450, loc:(*time.Location)(0x7db7bc0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} | |
STEP: Deploying the webhook service | |
STEP: Verifying the service has paired with the endpoint | |
Mar 6 03:00:57.452: INFO: Waiting for the number of service e2e-test-webhook endpoints to be 1 | |
[It] should deny crd creation [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
STEP: Registering the crd webhook via the AdmissionRegistration API | |
Mar 6 03:01:07.479: INFO: Waiting for webhook configuration to be ready... | |
Mar 6 03:01:17.589: INFO: Waiting for webhook configuration to be ready... | |
Mar 6 03:01:27.689: INFO: Waiting for webhook configuration to be ready... | |
Mar 6 03:01:37.790: INFO: Waiting for webhook configuration to be ready... | |
Mar 6 03:01:47.799: INFO: Waiting for webhook configuration to be ready... | |
Mar 6 03:01:47.799: FAIL: waiting for webhook configuration to be ready | |
Unexpected error: | |
<*errors.errorString | 0xc0000b3950>: { | |
s: "timed out waiting for the condition", | |
} | |
timed out waiting for the condition | |
occurred | |
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 | |
STEP: Collecting events from namespace "webhook-4813". | |
STEP: Found 6 events. | |
Mar 6 03:01:47.804: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for sample-webhook-deployment-5f65f8c764-vkdnx: {default-scheduler } Scheduled: Successfully assigned webhook-4813/sample-webhook-deployment-5f65f8c764-vkdnx to worker02 | |
Mar 6 03:01:47.804: INFO: At 2020-03-06 03:00:50 +0000 UTC - event for sample-webhook-deployment: {deployment-controller } ScalingReplicaSet: Scaled up replica set sample-webhook-deployment-5f65f8c764 to 1 | |
Mar 6 03:01:47.804: INFO: At 2020-03-06 03:00:50 +0000 UTC - event for sample-webhook-deployment-5f65f8c764: {replicaset-controller } SuccessfulCreate: Created pod: sample-webhook-deployment-5f65f8c764-vkdnx | |
Mar 6 03:01:47.804: INFO: At 2020-03-06 03:00:52 +0000 UTC - event for sample-webhook-deployment-5f65f8c764-vkdnx: {kubelet worker02} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine | |
Mar 6 03:01:47.804: INFO: At 2020-03-06 03:00:52 +0000 UTC - event for sample-webhook-deployment-5f65f8c764-vkdnx: {kubelet worker02} Created: Created container sample-webhook | |
Mar 6 03:01:47.804: INFO: At 2020-03-06 03:00:52 +0000 UTC - event for sample-webhook-deployment-5f65f8c764-vkdnx: {kubelet worker02} Started: Started container sample-webhook | |
Mar 6 03:01:47.807: INFO: POD NODE PHASE GRACE CONDITIONS | |
Mar 6 03:01:47.807: INFO: sample-webhook-deployment-5f65f8c764-vkdnx worker02 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-06 03:00:50 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-06 03:00:52 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-06 03:00:52 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-06 03:00:50 +0000 UTC }] | |
Mar 6 03:01:47.807: INFO: | |
Mar 6 03:01:47.812: INFO: | |
Logging node info for node master01 | |
Mar 6 03:01:47.817: INFO: Node Info: &Node{ObjectMeta:{master01 /api/v1/nodes/master01 aeae8a5b-4e17-4702-bb02-bcfde6cdb12a 8971 0 2020-03-06 02:29:18 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master01 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"76:15:82:0d:8b:ab"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:192.168.1.247 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{42140479488 0} {<nil>} 41152812Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3823214592 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{37926431477 0} {<nil>} 37926431477 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3718356992 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-03-06 02:58:57 +0000 UTC,LastTransitionTime:2020-03-06 02:29:18 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-03-06 02:58:57 +0000 UTC,LastTransitionTime:2020-03-06 02:29:18 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-03-06 02:58:57 +0000 UTC,LastTransitionTime:2020-03-06 02:29:18 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-03-06 02:58:57 +0000 UTC,LastTransitionTime:2020-03-06 02:30:33 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:192.168.1.247,},NodeAddress{Type:Hostname,Address:master01,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:20200220105402131453637367482142,SystemUUID:195205FE-EE72-4794-8EAA-AC554EFDEC9B,BootID:6a3bf627-7476-4f52-84fa-f3eab6d26427,KernelVersion:3.10.0-1062.12.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.5,KubeletVersion:v1.17.3,KubeProxyVersion:v1.17.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[192.168.1.252/library/k8s-keepalived@sha256:3db0032ef2feef675710595681cf9463470af179cd324c6773e831b6649ef785 192.168.1.252/library/k8s-keepalived:1.3.5],SizeBytes:356553439,},ContainerImage{Names:[sonobuoy/systemd-logs@sha256:fadad24a66ddd544987c38811108a73d1a306dd3b5e3f090b207786f2825ffde sonobuoy/systemd-logs:v0.3],SizeBytes:297365055,},ContainerImage{Names:[192.168.1.252/library/etcd-amd64@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216 
192.168.1.252/library/etcd-amd64:3.4.3-0],SizeBytes:288426917,},ContainerImage{Names:[192.168.1.252/library/node@sha256:3226b047a7034918a05c986347c5fb4d2cce6d0844f325851bfba586271ee617 192.168.1.252/library/node:v3.12.0],SizeBytes:257501722,},ContainerImage{Names:[192.168.1.252/library/cni@sha256:dc3bc525f1d3b794db1f2a7ceb7d8b84699d13e1431fbc117063f7e2075ff4b5 192.168.1.252/library/cni:v3.12.0],SizeBytes:206678344,},ContainerImage{Names:[192.168.1.252/library/kube-apiserver@sha256:4ee4113bcce32ae436b15364b09a2439dee15f6a19bbcf43a470d3dc879b0c62 192.168.1.252/library/kube-apiserver:v1.17.3],SizeBytes:170986003,},ContainerImage{Names:[192.168.1.252/library/envoy@sha256:b36ee021fc4d285de7861dbaee01e7437ce1d63814ead6ae3e4dfcad4a951b2e 192.168.1.252/library/envoy:v1.12.2],SizeBytes:170487454,},ContainerImage{Names:[192.168.1.252/library/kube-controller-manager@sha256:8f03391c0d22e3da8d22725178efe3c4338e1920504f7f4eb4a1c7f5f40c4c6e 192.168.1.252/library/kube-controller-manager:v1.17.3],SizeBytes:160918035,},ContainerImage{Names:[192.168.1.252/library/dashboard@sha256:4e0d39dae7e089b77fe2bbcef648f89905716db9c1f0884950bfd42d9f446c29 192.168.1.252/library/dashboard:v2.0.0-rc5],SizeBytes:126359420,},ContainerImage{Names:[192.168.1.252/library/kube-proxy@sha256:8dfd672298f8fbcd37731e0f2e45be87199cee6ee1f482ffec766693dcec0ec4 192.168.1.252/library/kube-proxy:v1.17.3],SizeBytes:115964919,},ContainerImage{Names:[192.168.1.252/library/pod2daemon-flexvol@sha256:2bf967507ad1adb749f3484b5d39e7d7b8700c4a0f836e8093dae5c57a585ccf 192.168.1.252/library/pod2daemon-flexvol:v3.12.0],SizeBytes:111122324,},ContainerImage{Names:[192.168.1.252/library/kube-scheduler@sha256:eea535b58da5b2fb66b32c61c6913d014cf85061a39527ad9bca2fa84b53dc1c 192.168.1.252/library/kube-scheduler:v1.17.3],SizeBytes:94435859,},ContainerImage{Names:[sonobuoy/sonobuoy@sha256:73f9cfe546ac6d0d5b94308293484c29c3b02fc341a975cfca97c80dd8728ed7 sonobuoy/sonobuoy:v0.17.2],SizeBytes:84339798,},ContainerImage{Names:[192.168.1.252/library/k8s-haproxy@sha256:f88bca67a2782e7bd88af168d5c11216c32104d12fe9240fac54a1d3196e3f9c 192.168.1.252/library/k8s-haproxy:2.0.0],SizeBytes:73550856,},ContainerImage{Names:[192.168.1.252/library/kube-controllers@sha256:edf14a5bcc663d2b0013b1830469626b7aa27206cbc7715ed83c042890ca5837 192.168.1.252/library/kube-controllers:v3.12.0],SizeBytes:56567983,},ContainerImage{Names:[192.168.1.252/library/typha@sha256:3baf9aef445a3224160748d6f560426eab798d6c65620020b2466e114bf6805f 192.168.1.252/library/typha:v3.12.0],SizeBytes:56034822,},ContainerImage{Names:[192.168.1.252/library/flannel@sha256:bd76b84c74ad70368a2341c2402841b75950df881388e43fc2aca000c546653a 192.168.1.252/library/flannel:v0.11.0-amd64],SizeBytes:52567296,},ContainerImage{Names:[192.168.1.252/library/ctl@sha256:128e4c95cf92a482496d591c43cad2a6a21fab1f0e8a8f13e8503f1324106dc8 192.168.1.252/library/ctl:v3.12.0],SizeBytes:47895826,},ContainerImage{Names:[192.168.1.252/library/coredns@sha256:608ac7ccba5ce41c6941fca13bc67059c1eef927fd968b554b790e21cc92543c 192.168.1.252/library/coredns:1.6.5],SizeBytes:41578211,},ContainerImage{Names:[192.168.1.252/library/metrics-scraper@sha256:e24a74b3b1cdc84d6285d507a12eb06907fd8c457b3e8ae9baa9418eca43efc4 192.168.1.252/library/metrics-scraper:v1.0.3],SizeBytes:40105664,},ContainerImage{Names:[192.168.1.252/library/metrics-server-amd64@sha256:c9c4e95068b51d6b33a9dccc61875df07dc650abbf4ac1a19d58b4628f89288b 
192.168.1.252/library/metrics-server-amd64:v0.3.6],SizeBytes:39944451,},ContainerImage{Names:[192.168.1.252/library/contour@sha256:3e10c69dfeaa830b84a50e6b47ce90e0f5a1aa84daf77f7662313077fa9579cf 192.168.1.252/library/contour:v1.1.0],SizeBytes:35721216,},ContainerImage{Names:[192.168.1.252/library/kuard-amd64@sha256:bd17153e9a3319f401acc7a27759243f37d422c06cbbf01cb3e1f54bbbfe14f4 192.168.1.252/library/kuard-amd64:1],SizeBytes:19745911,},ContainerImage{Names:[192.168.1.252/library/pause@sha256:fcaff905397ba63fd376d0c3019f1f1cb6e7506131389edbcb3d22719f1ae54d 192.168.1.252/library/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} | |
Mar 6 03:01:47.817: INFO: | |
Logging kubelet events for node master01 | |
Mar 6 03:01:47.822: INFO: | |
Logging pods the kubelet thinks are on node master01 | |
Mar 6 03:01:47.841: INFO: kube-controller-manager-master01 started at 2020-03-06 02:30:22 +0000 UTC (0+1 container statuses recorded) | |
Mar 6 03:01:47.841: INFO: Container kube-controller-manager ready: true, restart count 1 | |
Mar 6 03:01:47.841: INFO: kube-scheduler-master01 started at 2020-03-06 02:30:22 +0000 UTC (0+1 container statuses recorded) | |
Mar 6 03:01:47.841: INFO: Container kube-scheduler ready: true, restart count 1 | |
Mar 6 03:01:47.841: INFO: sonobuoy-systemd-logs-daemon-set-2e5dbce20e154397-drhpn started at 2020-03-06 02:38:12 +0000 UTC (0+2 container statuses recorded) | |
Mar 6 03:01:47.841: INFO: Container sonobuoy-worker ready: true, restart count 0 | |
Mar 6 03:01:47.841: INFO: Container systemd-logs ready: true, restart count 0 | |
Mar 6 03:01:47.841: INFO: kube-flannel-ds-amd64-6mbnb started at 2020-03-06 02:30:22 +0000 UTC (1+1 container statuses recorded) | |
Mar 6 03:01:47.841: INFO: Init container install-cni ready: true, restart count 0 | |
Mar 6 03:01:47.841: INFO: Container kube-flannel ready: true, restart count 0 | |
Mar 6 03:01:47.841: INFO: kube-proxy-4j8ft started at 2020-03-06 02:30:22 +0000 UTC (0+1 container statuses recorded) | |
Mar 6 03:01:47.841: INFO: Container kube-proxy ready: true, restart count 0 | |
Mar 6 03:01:47.841: INFO: kube-apiserver-master01 started at 2020-03-06 02:30:22 +0000 UTC (0+1 container statuses recorded) | |
Mar 6 03:01:47.841: INFO: Container kube-apiserver ready: true, restart count 0 | |
W0306 03:01:47.846403 19 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. | |
Mar 6 03:01:47.861: INFO: | |
Latency metrics for node master01 | |
Mar 6 03:01:47.861: INFO: | |
Logging node info for node master02 | |
Mar 6 03:01:47.864: INFO: Node Info: &Node{ObjectMeta:{master02 /api/v1/nodes/master02 6a0ecb6f-ef31-4754-858b-3eba76999224 8958 0 2020-03-06 02:29:17 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master02 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"82:c1:38:99:3b:39"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:192.168.1.248 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{42140479488 0} {<nil>} 41152812Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3823214592 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{37926431477 0} {<nil>} 37926431477 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3718356992 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-03-06 02:58:54 +0000 UTC,LastTransitionTime:2020-03-06 02:29:16 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-03-06 02:58:54 +0000 UTC,LastTransitionTime:2020-03-06 02:29:16 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-03-06 02:58:54 +0000 UTC,LastTransitionTime:2020-03-06 02:29:16 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-03-06 02:58:54 +0000 UTC,LastTransitionTime:2020-03-06 02:30:10 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:192.168.1.248,},NodeAddress{Type:Hostname,Address:master02,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:20200220105402131453637367482142,SystemUUID:8B2C4639-6D22-4D0D-A03C-F6D7E328F9D5,BootID:efd7329f-ae31-4806-ba13-7fdd5fad57df,KernelVersion:3.10.0-1062.12.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.5,KubeletVersion:v1.17.3,KubeProxyVersion:v1.17.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[sonobuoy/systemd-logs@sha256:fadad24a66ddd544987c38811108a73d1a306dd3b5e3f090b207786f2825ffde sonobuoy/systemd-logs:v0.3],SizeBytes:297365055,},ContainerImage{Names:[192.168.1.252/library/etcd-amd64@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216 192.168.1.252/library/etcd-amd64:3.4.3-0],SizeBytes:288426917,},ContainerImage{Names:[192.168.1.252/library/kube-apiserver@sha256:4ee4113bcce32ae436b15364b09a2439dee15f6a19bbcf43a470d3dc879b0c62 
192.168.1.252/library/kube-apiserver:v1.17.3],SizeBytes:170986003,},ContainerImage{Names:[192.168.1.252/library/kube-controller-manager@sha256:8f03391c0d22e3da8d22725178efe3c4338e1920504f7f4eb4a1c7f5f40c4c6e 192.168.1.252/library/kube-controller-manager:v1.17.3],SizeBytes:160918035,},ContainerImage{Names:[192.168.1.252/library/kube-proxy@sha256:8dfd672298f8fbcd37731e0f2e45be87199cee6ee1f482ffec766693dcec0ec4 192.168.1.252/library/kube-proxy:v1.17.3],SizeBytes:115964919,},ContainerImage{Names:[192.168.1.252/library/kube-scheduler@sha256:eea535b58da5b2fb66b32c61c6913d014cf85061a39527ad9bca2fa84b53dc1c 192.168.1.252/library/kube-scheduler:v1.17.3],SizeBytes:94435859,},ContainerImage{Names:[sonobuoy/sonobuoy@sha256:73f9cfe546ac6d0d5b94308293484c29c3b02fc341a975cfca97c80dd8728ed7 sonobuoy/sonobuoy:v0.17.2],SizeBytes:84339798,},ContainerImage{Names:[192.168.1.252/library/k8s-haproxy@sha256:f88bca67a2782e7bd88af168d5c11216c32104d12fe9240fac54a1d3196e3f9c 192.168.1.252/library/k8s-haproxy:2.0.0],SizeBytes:73550856,},ContainerImage{Names:[192.168.1.252/library/flannel@sha256:bd76b84c74ad70368a2341c2402841b75950df881388e43fc2aca000c546653a 192.168.1.252/library/flannel:v0.11.0-amd64],SizeBytes:52567296,},ContainerImage{Names:[192.168.1.252/library/coredns@sha256:608ac7ccba5ce41c6941fca13bc67059c1eef927fd968b554b790e21cc92543c 192.168.1.252/library/coredns:1.6.5],SizeBytes:41578211,},ContainerImage{Names:[192.168.1.252/library/pause@sha256:fcaff905397ba63fd376d0c3019f1f1cb6e7506131389edbcb3d22719f1ae54d 192.168.1.252/library/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} | |
Mar 6 03:01:47.864: INFO: | |
Logging kubelet events for node master02 | |
Mar 6 03:01:47.868: INFO: | |
Logging pods the kubelet thinks are on node master02 | |
Mar 6 03:01:47.878: INFO: kube-apiserver-master02 started at 2020-03-06 02:29:50 +0000 UTC (0+1 container statuses recorded) | |
Mar 6 03:01:47.878: INFO: Container kube-apiserver ready: true, restart count 0 | |
Mar 6 03:01:47.878: INFO: kube-controller-manager-master02 started at 2020-03-06 02:29:50 +0000 UTC (0+1 container statuses recorded) | |
Mar 6 03:01:47.878: INFO: Container kube-controller-manager ready: true, restart count 1 | |
Mar 6 03:01:47.878: INFO: kube-scheduler-master02 started at 2020-03-06 02:29:23 +0000 UTC (0+1 container statuses recorded) | |
Mar 6 03:01:47.878: INFO: Container kube-scheduler ready: true, restart count 1 | |
Mar 6 03:01:47.878: INFO: kube-proxy-scdss started at 2020-03-06 02:30:00 +0000 UTC (0+1 container statuses recorded) | |
Mar 6 03:01:47.878: INFO: Container kube-proxy ready: true, restart count 0 | |
Mar 6 03:01:47.878: INFO: kube-flannel-ds-amd64-vfl78 started at 2020-03-06 02:30:00 +0000 UTC (1+1 container statuses recorded) | |
Mar 6 03:01:47.878: INFO: Init container install-cni ready: true, restart count 0 | |
Mar 6 03:01:47.878: INFO: Container kube-flannel ready: true, restart count 0 | |
Mar 6 03:01:47.878: INFO: coredns-7795996659-phdkc started at 2020-03-06 02:30:13 +0000 UTC (0+1 container statuses recorded) | |
Mar 6 03:01:47.878: INFO: Container coredns ready: true, restart count 0 | |
Mar 6 03:01:47.878: INFO: sonobuoy-systemd-logs-daemon-set-2e5dbce20e154397-2zmwm started at 2020-03-06 02:38:12 +0000 UTC (0+2 container statuses recorded) | |
Mar 6 03:01:47.878: INFO: Container sonobuoy-worker ready: true, restart count 0 | |
Mar 6 03:01:47.878: INFO: Container systemd-logs ready: true, restart count 0 | |
W0306 03:01:47.881083 19 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. | |
Mar 6 03:01:47.902: INFO: | |
Latency metrics for node master02 | |
Mar 6 03:01:47.902: INFO: | |
Logging node info for node master03 | |
Mar 6 03:01:47.903: INFO: Node Info: &Node{ObjectMeta:{master03 /api/v1/nodes/master03 c508ee4c-fe9d-4c73-a857-e57fba26fa86 8959 0 2020-03-06 02:29:17 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master03 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"4a:aa:08:ea:16:90"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:192.168.1.249 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{42140479488 0} {<nil>} 41152812Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3823226880 0} {<nil>} 3733620Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{37926431477 0} {<nil>} 37926431477 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3718369280 0} {<nil>} 3631220Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-03-06 02:58:54 +0000 UTC,LastTransitionTime:2020-03-06 02:29:17 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-03-06 02:58:54 +0000 UTC,LastTransitionTime:2020-03-06 02:29:17 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-03-06 02:58:54 +0000 UTC,LastTransitionTime:2020-03-06 02:29:17 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-03-06 02:58:54 +0000 UTC,LastTransitionTime:2020-03-06 02:30:10 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:192.168.1.249,},NodeAddress{Type:Hostname,Address:master03,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:20200220105402131453637367482142,SystemUUID:0C8F0A67-EB7E-42DE-9418-0973FE735A08,BootID:05b1fa23-e6be-4032-bc93-8800264dff91,KernelVersion:3.10.0-1062.12.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.5,KubeletVersion:v1.17.3,KubeProxyVersion:v1.17.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[sonobuoy/systemd-logs@sha256:fadad24a66ddd544987c38811108a73d1a306dd3b5e3f090b207786f2825ffde sonobuoy/systemd-logs:v0.3],SizeBytes:297365055,},ContainerImage{Names:[192.168.1.252/library/etcd-amd64@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216 192.168.1.252/library/etcd-amd64:3.4.3-0],SizeBytes:288426917,},ContainerImage{Names:[192.168.1.252/library/kube-apiserver@sha256:4ee4113bcce32ae436b15364b09a2439dee15f6a19bbcf43a470d3dc879b0c62 
192.168.1.252/library/kube-apiserver:v1.17.3],SizeBytes:170986003,},ContainerImage{Names:[192.168.1.252/library/kube-controller-manager@sha256:8f03391c0d22e3da8d22725178efe3c4338e1920504f7f4eb4a1c7f5f40c4c6e 192.168.1.252/library/kube-controller-manager:v1.17.3],SizeBytes:160918035,},ContainerImage{Names:[192.168.1.252/library/dashboard@sha256:4e0d39dae7e089b77fe2bbcef648f89905716db9c1f0884950bfd42d9f446c29 192.168.1.252/library/dashboard:v2.0.0-rc5],SizeBytes:126359420,},ContainerImage{Names:[192.168.1.252/library/kube-proxy@sha256:8dfd672298f8fbcd37731e0f2e45be87199cee6ee1f482ffec766693dcec0ec4 192.168.1.252/library/kube-proxy:v1.17.3],SizeBytes:115964919,},ContainerImage{Names:[192.168.1.252/library/kube-scheduler@sha256:eea535b58da5b2fb66b32c61c6913d014cf85061a39527ad9bca2fa84b53dc1c 192.168.1.252/library/kube-scheduler:v1.17.3],SizeBytes:94435859,},ContainerImage{Names:[sonobuoy/sonobuoy@sha256:73f9cfe546ac6d0d5b94308293484c29c3b02fc341a975cfca97c80dd8728ed7 sonobuoy/sonobuoy:v0.17.2],SizeBytes:84339798,},ContainerImage{Names:[192.168.1.252/library/k8s-haproxy@sha256:f88bca67a2782e7bd88af168d5c11216c32104d12fe9240fac54a1d3196e3f9c 192.168.1.252/library/k8s-haproxy:2.0.0],SizeBytes:73550856,},ContainerImage{Names:[192.168.1.252/library/flannel@sha256:bd76b84c74ad70368a2341c2402841b75950df881388e43fc2aca000c546653a 192.168.1.252/library/flannel:v0.11.0-amd64],SizeBytes:52567296,},ContainerImage{Names:[192.168.1.252/library/coredns@sha256:608ac7ccba5ce41c6941fca13bc67059c1eef927fd968b554b790e21cc92543c 192.168.1.252/library/coredns:1.6.5],SizeBytes:41578211,},ContainerImage{Names:[192.168.1.252/library/metrics-scraper@sha256:e24a74b3b1cdc84d6285d507a12eb06907fd8c457b3e8ae9baa9418eca43efc4 192.168.1.252/library/metrics-scraper:v1.0.3],SizeBytes:40105664,},ContainerImage{Names:[192.168.1.252/library/pause@sha256:fcaff905397ba63fd376d0c3019f1f1cb6e7506131389edbcb3d22719f1ae54d 192.168.1.252/library/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} | |
Mar 6 03:01:47.903: INFO: | |
Logging kubelet events for node master03 | |
Mar 6 03:01:47.909: INFO: | |
Logging pods the kubelet thinks are on node master03 | |
Mar 6 03:01:47.922: INFO: kube-apiserver-master03 started at 2020-03-06 02:29:24 +0000 UTC (0+1 container statuses recorded) | |
Mar 6 03:01:47.922: INFO: Container kube-apiserver ready: true, restart count 0 | |
Mar 6 03:01:47.922: INFO: kube-scheduler-master03 started at 2020-03-06 02:29:38 +0000 UTC (0+1 container statuses recorded) | |
Mar 6 03:01:47.922: INFO: Container kube-scheduler ready: true, restart count 1 | |
Mar 6 03:01:47.922: INFO: kube-proxy-stbnn started at 2020-03-06 02:30:00 +0000 UTC (0+1 container statuses recorded) | |
Mar 6 03:01:47.922: INFO: Container kube-proxy ready: true, restart count 0 | |
Mar 6 03:01:47.922: INFO: kubernetes-dashboard-6647798d59-j2ms4 started at 2020-03-06 02:30:10 +0000 UTC (0+1 container statuses recorded) | |
Mar 6 03:01:47.922: INFO: Container kubernetes-dashboard ready: true, restart count 0 | |
Mar 6 03:01:47.922: INFO: coredns-7795996659-cmq4d started at 2020-03-06 02:30:13 +0000 UTC (0+1 container statuses recorded) | |
Mar 6 03:01:47.922: INFO: Container coredns ready: true, restart count 0 | |
Mar 6 03:01:47.922: INFO: sonobuoy-systemd-logs-daemon-set-2e5dbce20e154397-w5psq started at 2020-03-06 02:38:12 +0000 UTC (0+2 container statuses recorded) | |
Mar 6 03:01:47.922: INFO: Container sonobuoy-worker ready: true, restart count 0 | |
Mar 6 03:01:47.922: INFO: Container systemd-logs ready: true, restart count 0 | |
Mar 6 03:01:47.922: INFO: kube-controller-manager-master03 started at 2020-03-06 02:29:50 +0000 UTC (0+1 container statuses recorded) | |
Mar 6 03:01:47.922: INFO: Container kube-controller-manager ready: true, restart count 1 | |
Mar 6 03:01:47.922: INFO: kube-flannel-ds-amd64-hs69k started at 2020-03-06 02:30:00 +0000 UTC (1+1 container statuses recorded) | |
Mar 6 03:01:47.922: INFO: Init container install-cni ready: true, restart count 0 | |
Mar 6 03:01:47.922: INFO: Container kube-flannel ready: true, restart count 0 | |
Mar 6 03:01:47.922: INFO: dashboard-metrics-scraper-56568cb9d7-d57kl started at 2020-03-06 02:30:10 +0000 UTC (0+1 container statuses recorded) | |
Mar 6 03:01:47.922: INFO: Container dashboard-metrics-scraper ready: true, restart count 0 | |
W0306 03:01:47.925426 19 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. | |
Mar 6 03:01:47.944: INFO: | |
Latency metrics for node master03 | |
Mar 6 03:01:47.944: INFO: | |
Logging node info for node worker01 | |
Mar 6 03:01:47.946: INFO: Node Info: &Node{ObjectMeta:{worker01 /api/v1/nodes/worker01 cf4203bb-1bfa-4b35-991f-935275b6bc46 9595 0 2020-03-06 02:30:30 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:worker01 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"5a:49:f5:5b:74:b3"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:192.168.1.250 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{42140479488 0} {<nil>} 41152812Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3823214592 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{37926431477 0} {<nil>} 37926431477 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3718356992 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-03-06 03:01:20 +0000 UTC,LastTransitionTime:2020-03-06 02:30:30 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-03-06 03:01:20 +0000 UTC,LastTransitionTime:2020-03-06 02:30:30 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-03-06 03:01:20 +0000 UTC,LastTransitionTime:2020-03-06 02:30:30 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-03-06 03:01:20 +0000 UTC,LastTransitionTime:2020-03-06 02:30:45 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:192.168.1.250,},NodeAddress{Type:Hostname,Address:worker01,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:20200220105402131453637367482142,SystemUUID:5CA364AA-0FF8-4B57-BA86-F28699575F0D,BootID:c85ad0c4-ebcf-4d01-97f0-a36c1cfc50be,KernelVersion:3.10.0-1062.12.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.5,KubeletVersion:v1.17.3,KubeProxyVersion:v1.17.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[sonobuoy/systemd-logs@sha256:fadad24a66ddd544987c38811108a73d1a306dd3b5e3f090b207786f2825ffde sonobuoy/systemd-logs:v0.3],SizeBytes:297365055,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[192.168.1.252/library/kube-proxy@sha256:8dfd672298f8fbcd37731e0f2e45be87199cee6ee1f482ffec766693dcec0ec4 
192.168.1.252/library/kube-proxy:v1.17.3],SizeBytes:115964919,},ContainerImage{Names:[sonobuoy/sonobuoy@sha256:73f9cfe546ac6d0d5b94308293484c29c3b02fc341a975cfca97c80dd8728ed7 sonobuoy/sonobuoy:v0.17.2],SizeBytes:84339798,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5 gcr.io/kubernetes-e2e-test-images/agnhost:2.8],SizeBytes:52800335,},ContainerImage{Names:[192.168.1.252/library/flannel@sha256:bd76b84c74ad70368a2341c2402841b75950df881388e43fc2aca000c546653a 192.168.1.252/library/flannel:v0.11.0-amd64],SizeBytes:52567296,},ContainerImage{Names:[192.168.1.252/library/metrics-server-amd64@sha256:c9c4e95068b51d6b33a9dccc61875df07dc650abbf4ac1a19d58b4628f89288b 192.168.1.252/library/metrics-server-amd64:v0.3.6],SizeBytes:39944451,},ContainerImage{Names:[192.168.1.252/library/kuard-amd64@sha256:bd17153e9a3319f401acc7a27759243f37d422c06cbbf01cb3e1f54bbbfe14f4 192.168.1.252/library/kuard-amd64:1],SizeBytes:19745911,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[192.168.1.252/library/pause@sha256:fcaff905397ba63fd376d0c3019f1f1cb6e7506131389edbcb3d22719f1ae54d 192.168.1.252/library/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} | |
Mar 6 03:01:47.946: INFO: | |
Logging kubelet events for node worker01 | |
Mar 6 03:01:47.950: INFO: | |
Logging pods the kubelet thinks are on node worker01 | |
Mar 6 03:01:47.960: INFO: envoy-lvmcb started at 2020-03-06 02:30:45 +0000 UTC (1+1 container statuses recorded) | |
Mar 6 03:01:47.960: INFO: Init container envoy-initconfig ready: false, restart count 0 | |
Mar 6 03:01:47.960: INFO: Container envoy ready: false, restart count 0 | |
Mar 6 03:01:47.960: INFO: kuard-678c676f5d-m29b6 started at 2020-03-06 02:30:49 +0000 UTC (0+1 container statuses recorded) | |
Mar 6 03:01:47.960: INFO: Container kuard ready: true, restart count 0 | |
Mar 6 03:01:47.960: INFO: kuard-678c676f5d-tzsnn started at 2020-03-06 02:30:51 +0000 UTC (0+1 container statuses recorded) | |
Mar 6 03:01:47.960: INFO: Container kuard ready: true, restart count 0 | |
Mar 6 03:01:47.960: INFO: contour-54748c65f5-jl5wz started at 2020-03-06 02:30:46 +0000 UTC (0+1 container statuses recorded) | |
Mar 6 03:01:47.960: INFO: Container contour ready: false, restart count 0 | |
Mar 6 03:01:47.960: INFO: metrics-server-78799bf646-xrsnn started at 2020-03-06 02:30:51 +0000 UTC (0+1 container statuses recorded) | |
Mar 6 03:01:47.960: INFO: Container metrics-server ready: true, restart count 0 | |
Mar 6 03:01:47.960: INFO: kube-proxy-kcb8f started at 2020-03-06 02:30:30 +0000 UTC (0+1 container statuses recorded) | |
Mar 6 03:01:47.960: INFO: Container kube-proxy ready: true, restart count 0 | |
Mar 6 03:01:47.960: INFO: contour-certgen-82k46 started at 2020-03-06 02:30:46 +0000 UTC (0+1 container statuses recorded) | |
Mar 6 03:01:47.960: INFO: Container contour ready: false, restart count 0 | |
Mar 6 03:01:47.960: INFO: contour-54748c65f5-gk5sz started at 2020-03-06 02:30:51 +0000 UTC (0+1 container statuses recorded) | |
Mar 6 03:01:47.960: INFO: Container contour ready: false, restart count 0 | |
Mar 6 03:01:47.960: INFO: sonobuoy-systemd-logs-daemon-set-2e5dbce20e154397-2bz8g started at 2020-03-06 02:38:12 +0000 UTC (0+2 container statuses recorded) | |
Mar 6 03:01:47.960: INFO: Container sonobuoy-worker ready: true, restart count 0 | |
Mar 6 03:01:47.960: INFO: Container systemd-logs ready: true, restart count 0 | |
Mar 6 03:01:47.960: INFO: kube-flannel-ds-amd64-xxhz9 started at 2020-03-06 02:30:30 +0000 UTC (1+1 container statuses recorded) | |
Mar 6 03:01:47.960: INFO: Init container install-cni ready: true, restart count 0 | |
Mar 6 03:01:47.960: INFO: Container kube-flannel ready: true, restart count 1 | |
Mar 6 03:01:47.960: INFO: kuard-678c676f5d-vsn86 started at 2020-03-06 02:30:51 +0000 UTC (0+1 container statuses recorded) | |
Mar 6 03:01:47.960: INFO: Container kuard ready: true, restart count 0 | |
W0306 03:01:47.963566 19 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. | |
Mar 6 03:01:47.983: INFO: | |
Latency metrics for node worker01 | |
Mar 6 03:01:47.983: INFO: | |
Logging node info for node worker02 | |
Mar 6 03:01:47.985: INFO: Node Info: &Node{ObjectMeta:{worker02 /api/v1/nodes/worker02 f0994ba1-7e4e-4cc8-b3c8-25d34b25d9ce 9230 0 2020-03-06 02:30:30 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:worker02 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"1a:75:0a:e8:cc:76"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:192.168.1.251 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{42140479488 0} {<nil>} 41152812Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3823214592 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{37926431477 0} {<nil>} 37926431477 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3718356992 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-03-06 03:00:19 +0000 UTC,LastTransitionTime:2020-03-06 02:30:30 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-03-06 03:00:19 +0000 UTC,LastTransitionTime:2020-03-06 02:30:30 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-03-06 03:00:19 +0000 UTC,LastTransitionTime:2020-03-06 02:30:30 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-03-06 03:00:19 +0000 UTC,LastTransitionTime:2020-03-06 02:30:55 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:192.168.1.251,},NodeAddress{Type:Hostname,Address:worker02,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:20200220105402131453637367482142,SystemUUID:EDBF7E33-228B-4233-93CF-7850B5A311E4,BootID:bd6a4f0f-5ddb-4585-83df-253b9292b617,KernelVersion:3.10.0-1062.12.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.5,KubeletVersion:v1.17.3,KubeProxyVersion:v1.17.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[gcr.io/google-containers/conformance@sha256:502434491cbc3fac5d9a606a879e554cf881b2ba5b688bed25f2c33d3ff1c777 gcr.io/google-containers/conformance:v1.17.3],SizeBytes:575831882,},ContainerImage{Names:[sonobuoy/systemd-logs@sha256:fadad24a66ddd544987c38811108a73d1a306dd3b5e3f090b207786f2825ffde sonobuoy/systemd-logs:v0.3],SizeBytes:297365055,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 
httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[192.168.1.252/library/kube-proxy@sha256:8dfd672298f8fbcd37731e0f2e45be87199cee6ee1f482ffec766693dcec0ec4 192.168.1.252/library/kube-proxy:v1.17.3],SizeBytes:115964919,},ContainerImage{Names:[sonobuoy/sonobuoy@sha256:73f9cfe546ac6d0d5b94308293484c29c3b02fc341a975cfca97c80dd8728ed7 sonobuoy/sonobuoy:v0.17.2],SizeBytes:84339798,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5 gcr.io/kubernetes-e2e-test-images/agnhost:2.8],SizeBytes:52800335,},ContainerImage{Names:[192.168.1.252/library/flannel@sha256:bd76b84c74ad70368a2341c2402841b75950df881388e43fc2aca000c546653a 192.168.1.252/library/flannel:v0.11.0-amd64],SizeBytes:52567296,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0],SizeBytes:4732240,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0],SizeBytes:1450451,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[192.168.1.252/library/pause@sha256:fcaff905397ba63fd376d0c3019f1f1cb6e7506131389edbcb3d22719f1ae54d k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea 192.168.1.252/library/pause:3.1 k8s.gcr.io/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} | |
Mar 6 03:01:47.985: INFO: | |
Logging kubelet events for node worker02 | |
Mar 6 03:01:47.989: INFO: | |
Logging pods the kubelet thinks are on node worker02 | |
Mar 6 03:01:48.001: INFO: kube-flannel-ds-amd64-ztfzf started at 2020-03-06 02:30:30 +0000 UTC (1+1 container statuses recorded) | |
Mar 6 03:01:48.001: INFO: Init container install-cni ready: true, restart count 0 | |
Mar 6 03:01:48.001: INFO: Container kube-flannel ready: true, restart count 0 | |
Mar 6 03:01:48.001: INFO: envoy-wgz76 started at 2020-03-06 02:30:55 +0000 UTC (1+1 container statuses recorded) | |
Mar 6 03:01:48.001: INFO: Init container envoy-initconfig ready: false, restart count 0 | |
Mar 6 03:01:48.001: INFO: Container envoy ready: false, restart count 0 | |
Mar 6 03:01:48.001: INFO: sonobuoy-e2e-job-67137ff64ac145d3 started at 2020-03-06 02:38:12 +0000 UTC (0+2 container statuses recorded) | |
Mar 6 03:01:48.001: INFO: Container e2e ready: true, restart count 0 | |
Mar 6 03:01:48.001: INFO: Container sonobuoy-worker ready: true, restart count 0 | |
Mar 6 03:01:48.001: INFO: sonobuoy-systemd-logs-daemon-set-2e5dbce20e154397-bpjtd started at 2020-03-06 02:38:12 +0000 UTC (0+2 container statuses recorded) | |
Mar 6 03:01:48.001: INFO: Container sonobuoy-worker ready: true, restart count 0 | |
Mar 6 03:01:48.001: INFO: Container systemd-logs ready: true, restart count 0 | |
Mar 6 03:01:48.001: INFO: sample-webhook-deployment-5f65f8c764-vkdnx started at 2020-03-06 03:00:50 +0000 UTC (0+1 container statuses recorded) | |
Mar 6 03:01:48.001: INFO: Container sample-webhook ready: true, restart count 0 | |
Mar 6 03:01:48.001: INFO: kube-proxy-5xxdb started at 2020-03-06 02:30:30 +0000 UTC (0+1 container statuses recorded) | |
Mar 6 03:01:48.001: INFO: Container kube-proxy ready: true, restart count 1 | |
Mar 6 03:01:48.001: INFO: sonobuoy started at 2020-03-06 02:38:02 +0000 UTC (0+1 container statuses recorded) | |
Mar 6 03:01:48.001: INFO: Container kube-sonobuoy ready: true, restart count 0 | |
W0306 03:01:48.004119 19 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. | |
Mar 6 03:01:48.024: INFO: | |
Latency metrics for node worker02 | |
Mar 6 03:01:48.024: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
STEP: Destroying namespace "webhook-4813" for this suite. | |
STEP: Destroying namespace "webhook-4813-markers" for this suite. | |
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 | |
• Failure [58.106 seconds] | |
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 | |
should deny crd creation [Conformance] [It] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
Mar 6 03:01:47.799: waiting for webhook configuration to be ready | |
Unexpected error: | |
<*errors.errorString | 0xc0000b3950>: { | |
s: "timed out waiting for the condition", | |
} | |
timed out waiting for the condition | |
occurred | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:2096 | |
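The failure above is the framework giving up after repeated readiness probes against the just-registered webhook; "timed out waiting for the condition" is the stock error returned by the polling helper in k8s.io/apimachinery (wait.ErrWaitTimeout). A minimal sketch of that retry shape, with a hypothetical probe standing in for the framework's marker-request check and illustrative intervals that do not match the framework's exactly:

    package main

    import (
        "fmt"
        "time"

        "k8s.io/apimachinery/pkg/util/wait"
    )

    func main() {
        // Poll until the probe reports the webhook is intercepting requests, or give up.
        err := wait.PollImmediate(10*time.Second, 50*time.Second, func() (bool, error) {
            fmt.Println("Waiting for webhook configuration to be ready...")
            ready := false // hypothetical probe: fire a marker request, check the webhook handled it
            return ready, nil
        })
        if err != nil {
            // err is wait.ErrWaitTimeout: "timed out waiting for the condition"
            fmt.Printf("waiting for webhook configuration to be ready: %v\n", err)
        }
    }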
------------------------------ | |
{"msg":"FAILED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":278,"completed":58,"skipped":1120,"failed":4,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]"]} | |
SSSSSSSS | |
------------------------------ | |
[k8s.io] Variable Expansion | |
should allow substituting values in a container's command [NodeConformance] [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
[BeforeEach] [k8s.io] Variable Expansion | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 | |
STEP: Creating a kubernetes client | |
Mar 6 03:01:48.092: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759 | |
STEP: Building a namespace api object, basename var-expansion | |
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in var-expansion-2293 | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[It] should allow substituting values in a container's command [NodeConformance] [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
STEP: Creating a pod to test substitution in container's command | |
Mar 6 03:01:48.262: INFO: Waiting up to 5m0s for pod "var-expansion-35b166b4-0f87-4f61-aec5-940213d5a5d5" in namespace "var-expansion-2293" to be "success or failure" | |
Mar 6 03:01:48.264: INFO: Pod "var-expansion-35b166b4-0f87-4f61-aec5-940213d5a5d5": Phase="Pending", Reason="", readiness=false. Elapsed: 1.934874ms | |
Mar 6 03:01:50.268: INFO: Pod "var-expansion-35b166b4-0f87-4f61-aec5-940213d5a5d5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.005626655s | |
STEP: Saw pod success | |
Mar 6 03:01:50.268: INFO: Pod "var-expansion-35b166b4-0f87-4f61-aec5-940213d5a5d5" satisfied condition "success or failure" | |
Mar 6 03:01:50.269: INFO: Trying to get logs from node worker02 pod var-expansion-35b166b4-0f87-4f61-aec5-940213d5a5d5 container dapi-container: <nil> | |
STEP: delete the pod | |
Mar 6 03:01:50.288: INFO: Waiting for pod var-expansion-35b166b4-0f87-4f61-aec5-940213d5a5d5 to disappear | |
Mar 6 03:01:50.289: INFO: Pod var-expansion-35b166b4-0f87-4f61-aec5-940213d5a5d5 no longer exists | |
[AfterEach] [k8s.io] Variable Expansion | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 | |
Mar 6 03:01:50.289: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
STEP: Destroying namespace "var-expansion-2293" for this suite. | |
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":278,"completed":59,"skipped":1128,"failed":4,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]"]} | |
SSSSSSSSSSSSSSSSSSSSSS | |
------------------------------ | |
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] | |
works for CRD preserving unknown fields in an embedded object [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 | |
STEP: Creating a kubernetes client | |
Mar 6 03:01:50.299: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759 | |
STEP: Building a namespace api object, basename crd-publish-openapi | |
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in crd-publish-openapi-4672 | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[It] works for CRD preserving unknown fields in an embedded object [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
Mar 6 03:01:50.441: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759 | |
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties | |
Mar 6 03:02:05.754: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 --namespace=crd-publish-openapi-4672 create -f -' | |
Mar 6 03:02:15.970: INFO: stderr: "" | |
Mar 6 03:02:15.970: INFO: stdout: "e2e-test-crd-publish-openapi-563-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" | |
Mar 6 03:02:15.970: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 --namespace=crd-publish-openapi-4672 delete e2e-test-crd-publish-openapi-563-crds test-cr' | |
Mar 6 03:02:31.088: INFO: stderr: "" | |
Mar 6 03:02:31.088: INFO: stdout: "e2e-test-crd-publish-openapi-563-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" | |
Mar 6 03:02:31.088: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 --namespace=crd-publish-openapi-4672 apply -f -' | |
Mar 6 03:02:36.324: INFO: stderr: "" | |
Mar 6 03:02:36.324: INFO: stdout: "e2e-test-crd-publish-openapi-563-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" | |
Mar 6 03:02:36.324: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 --namespace=crd-publish-openapi-4672 delete e2e-test-crd-publish-openapi-563-crds test-cr' | |
Mar 6 03:02:51.404: INFO: stderr: "" | |
Mar 6 03:02:51.404: INFO: stdout: "e2e-test-crd-publish-openapi-563-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" | |
STEP: kubectl explain works to explain CR | |
Mar 6 03:02:51.404: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 explain e2e-test-crd-publish-openapi-563-crds' | |
Mar 6 03:03:06.566: INFO: stderr: "" | |
Mar 6 03:03:06.566: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-563-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t<string>\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t<string>\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t<Object>\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t<map[string]>\n Specification of Waldo\n\n status\t<Object>\n Status of Waldo\n\n" | |
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 | |
Mar 6 03:03:20.863: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
STEP: Destroying namespace "crd-publish-openapi-4672" for this suite. | |
• [SLOW TEST:90.571 seconds] | |
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 | |
works for CRD preserving unknown fields in an embedded object [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
------------------------------ | |
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":278,"completed":60,"skipped":1150,"failed":4,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]"]} | |
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS | |
------------------------------ | |
[sig-network] DNS | |
should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
[BeforeEach] [sig-network] DNS | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 | |
STEP: Creating a kubernetes client | |
Mar 6 03:03:20.870: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759 | |
STEP: Building a namespace api object, basename dns | |
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in dns-8913 | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-8913.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-8913.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8913.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done | |
STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-8913.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-8913.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8913.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done | |
STEP: creating a pod to probe /etc/hosts | |
STEP: submitting the pod to kubernetes | |
STEP: retrieving the pod | |
STEP: looking for the results for each expected name from probers | |
Mar 6 03:03:37.038: INFO: DNS probes using dns-8913/dns-test-b6c7cf0f-6f0a-445f-a237-eaef470edaf5 succeeded | |
STEP: deleting the pod | |
[AfterEach] [sig-network] DNS | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 | |
Mar 6 03:03:37.050: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
STEP: Destroying namespace "dns-8913" for this suite. | |
• [SLOW TEST:16.187 seconds] | |
[sig-network] DNS | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 | |
should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
------------------------------ | |
{"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":278,"completed":61,"skipped":1186,"failed":4,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]"]} | |
SSSSSSSSSSSSSSSSS | |
------------------------------ | |
[sig-storage] ConfigMap | |
updates should be reflected in volume [NodeConformance] [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
[BeforeEach] [sig-storage] ConfigMap | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 | |
STEP: Creating a kubernetes client | |
Mar 6 03:03:37.058: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759 | |
STEP: Building a namespace api object, basename configmap | |
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in configmap-718 | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[It] updates should be reflected in volume [NodeConformance] [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
STEP: Creating configMap with name configmap-test-upd-5af059d3-5a1b-4f1e-9d82-8fa7e9d5a8d6 | |
STEP: Creating the pod | |
STEP: Updating configmap configmap-test-upd-5af059d3-5a1b-4f1e-9d82-8fa7e9d5a8d6 | |
STEP: waiting to observe update in volume | |
[AfterEach] [sig-storage] ConfigMap | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 | |
Mar 6 03:03:41.262: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
STEP: Destroying namespace "configmap-718" for this suite. | |
•{"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":62,"skipped":1203,"failed":4,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]"]} | |
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS | |
------------------------------ | |
[sig-storage] Projected configMap | |
should be consumable from pods in volume as non-root [NodeConformance] [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
[BeforeEach] [sig-storage] Projected configMap | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 | |
STEP: Creating a kubernetes client | |
Mar 6 03:03:41.270: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759 | |
STEP: Building a namespace api object, basename projected | |
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-1660 | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
STEP: Creating configMap with name projected-configmap-test-volume-75b9cdfc-af69-44a5-8bdc-94c195db1c97 | |
STEP: Creating a pod to test consume configMaps | |
Mar 6 03:03:41.417: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-0fe6ae59-8536-4cea-a2d9-5f0e297267d5" in namespace "projected-1660" to be "success or failure" | |
Mar 6 03:03:41.419: INFO: Pod "pod-projected-configmaps-0fe6ae59-8536-4cea-a2d9-5f0e297267d5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029205ms | |
Mar 6 03:03:43.421: INFO: Pod "pod-projected-configmaps-0fe6ae59-8536-4cea-a2d9-5f0e297267d5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.004452255s | |
STEP: Saw pod success | |
Mar 6 03:03:43.421: INFO: Pod "pod-projected-configmaps-0fe6ae59-8536-4cea-a2d9-5f0e297267d5" satisfied condition "success or failure" | |
Mar 6 03:03:43.423: INFO: Trying to get logs from node worker02 pod pod-projected-configmaps-0fe6ae59-8536-4cea-a2d9-5f0e297267d5 container projected-configmap-volume-test: <nil> | |
STEP: delete the pod | |
Mar 6 03:03:43.438: INFO: Waiting for pod pod-projected-configmaps-0fe6ae59-8536-4cea-a2d9-5f0e297267d5 to disappear | |
Mar 6 03:03:43.440: INFO: Pod pod-projected-configmaps-0fe6ae59-8536-4cea-a2d9-5f0e297267d5 no longer exists | |
[AfterEach] [sig-storage] Projected configMap | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 | |
Mar 6 03:03:43.440: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
STEP: Destroying namespace "projected-1660" for this suite. | |
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":278,"completed":63,"skipped":1277,"failed":4,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]"]} | |
SS | |
------------------------------ | |
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] | |
should unconditionally reject operations on fail closed webhook [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 | |
STEP: Creating a kubernetes client | |
Mar 6 03:03:43.446: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759 | |
STEP: Building a namespace api object, basename webhook | |
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in webhook-7586 | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 | |
STEP: Setting up server cert | |
STEP: Create role binding to let webhook read extension-apiserver-authentication | |
STEP: Deploying the webhook pod | |
STEP: Wait for the deployment to be ready | |
Mar 6 03:03:44.350: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set | |
Mar 6 03:03:46.357: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719060624, loc:(*time.Location)(0x7db7bc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719060624, loc:(*time.Location)(0x7db7bc0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719060624, loc:(*time.Location)(0x7db7bc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719060624, loc:(*time.Location)(0x7db7bc0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} | |
STEP: Deploying the webhook service | |
STEP: Verifying the service has paired with the endpoint | |
Mar 6 03:03:49.372: INFO: Waiting for the number of service:e2e-test-webhook endpoints to be 1 | |
[It] should unconditionally reject operations on fail closed webhook [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API | |
Mar 6 03:03:59.390: INFO: Waiting for webhook configuration to be ready... | |
Mar 6 03:04:09.499: INFO: Waiting for webhook configuration to be ready... | |
Mar 6 03:04:19.599: INFO: Waiting for webhook configuration to be ready... | |
Mar 6 03:04:29.701: INFO: Waiting for webhook configuration to be ready... | |
Mar 6 03:04:39.712: INFO: Waiting for webhook configuration to be ready... | |
Mar 6 03:04:39.712: FAIL: waiting for webhook configuration to be ready | |
Unexpected error: | |
<*errors.errorString | 0xc0000b3950>: { | |
s: "timed out waiting for the condition", | |
} | |
timed out waiting for the condition | |
occurred | |
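The registration step above ("a webhook that server cannot talk to, with fail closed policy") creates a webhook whose backend is deliberately unreachable and whose failurePolicy is Fail, so the API server must reject matching requests outright rather than let them through. A minimal sketch of such a fail-closed configuration using the admissionregistration v1 Go types (all names hypothetical):

    package main

    import (
        admissionregistrationv1 "k8s.io/api/admissionregistration/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // failClosedWebhook points at a service that does not exist; with
    // FailurePolicy Fail, any matching request is denied when the webhook
    // cannot be called.
    func failClosedWebhook(namespace string) *admissionregistrationv1.ValidatingWebhookConfiguration {
        fail := admissionregistrationv1.Fail
        sideEffects := admissionregistrationv1.SideEffectClassNone
        path := "/"
        return &admissionregistrationv1.ValidatingWebhookConfiguration{
            ObjectMeta: metav1.ObjectMeta{Name: "fail-closed.k8s.io"},
            Webhooks: []admissionregistrationv1.ValidatingWebhook{{
                Name:          "fail-closed.k8s.io",
                FailurePolicy: &fail,
                ClientConfig: admissionregistrationv1.WebhookClientConfig{
                    Service: &admissionregistrationv1.ServiceReference{
                        Namespace: namespace,
                        Name:      "e2e-test-webhook-does-not-exist", // deliberately unreachable
                        Path:      &path,
                    },
                },
                Rules: []admissionregistrationv1.RuleWithOperations{{
                    Operations: []admissionregistrationv1.OperationType{admissionregistrationv1.Create},
                    Rule: admissionregistrationv1.Rule{
                        APIGroups:   []string{""},
                        APIVersions: []string{"v1"},
                        Resources:   []string{"configmaps"},
                    },
                }},
                SideEffects:             &sideEffects,
                AdmissionReviewVersions: []string{"v1", "v1beta1"},
            }},
        }
    }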
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 | |
STEP: Collecting events from namespace "webhook-7586". | |
STEP: Found 6 events. | |
Mar 6 03:04:39.715: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for sample-webhook-deployment-5f65f8c764-d65fz: {default-scheduler } Scheduled: Successfully assigned webhook-7586/sample-webhook-deployment-5f65f8c764-d65fz to worker02 | |
Mar 6 03:04:39.715: INFO: At 2020-03-06 03:03:44 +0000 UTC - event for sample-webhook-deployment: {deployment-controller } ScalingReplicaSet: Scaled up replica set sample-webhook-deployment-5f65f8c764 to 1 | |
Mar 6 03:04:39.715: INFO: At 2020-03-06 03:03:44 +0000 UTC - event for sample-webhook-deployment-5f65f8c764: {replicaset-controller } SuccessfulCreate: Created pod: sample-webhook-deployment-5f65f8c764-d65fz | |
Mar 6 03:04:39.715: INFO: At 2020-03-06 03:03:45 +0000 UTC - event for sample-webhook-deployment-5f65f8c764-d65fz: {kubelet worker02} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine | |
Mar 6 03:04:39.715: INFO: At 2020-03-06 03:03:45 +0000 UTC - event for sample-webhook-deployment-5f65f8c764-d65fz: {kubelet worker02} Created: Created container sample-webhook | |
Mar 6 03:04:39.715: INFO: At 2020-03-06 03:03:45 +0000 UTC - event for sample-webhook-deployment-5f65f8c764-d65fz: {kubelet worker02} Started: Started container sample-webhook | |
Mar 6 03:04:39.717: INFO: POD NODE PHASE GRACE CONDITIONS | |
Mar 6 03:04:39.717: INFO: sample-webhook-deployment-5f65f8c764-d65fz worker02 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-06 03:03:44 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-06 03:03:46 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-06 03:03:46 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-06 03:03:44 +0000 UTC }] | |
Mar 6 03:04:39.717: INFO: | |
Mar 6 03:04:39.721: INFO: | |
Logging node info for node master01 | |
Mar 6 03:04:39.722: INFO: Node Info: &Node{ObjectMeta:{master01 /api/v1/nodes/master01 aeae8a5b-4e17-4702-bb02-bcfde6cdb12a 10316 0 2020-03-06 02:29:18 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master01 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"76:15:82:0d:8b:ab"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:192.168.1.247 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{42140479488 0} {<nil>} 41152812Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3823214592 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{37926431477 0} {<nil>} 37926431477 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3718356992 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-03-06 03:03:58 +0000 UTC,LastTransitionTime:2020-03-06 02:29:18 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-03-06 03:03:58 +0000 UTC,LastTransitionTime:2020-03-06 02:29:18 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-03-06 03:03:58 +0000 UTC,LastTransitionTime:2020-03-06 02:29:18 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-03-06 03:03:58 +0000 UTC,LastTransitionTime:2020-03-06 02:30:33 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:192.168.1.247,},NodeAddress{Type:Hostname,Address:master01,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:20200220105402131453637367482142,SystemUUID:195205FE-EE72-4794-8EAA-AC554EFDEC9B,BootID:6a3bf627-7476-4f52-84fa-f3eab6d26427,KernelVersion:3.10.0-1062.12.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.5,KubeletVersion:v1.17.3,KubeProxyVersion:v1.17.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[192.168.1.252/library/k8s-keepalived@sha256:3db0032ef2feef675710595681cf9463470af179cd324c6773e831b6649ef785 192.168.1.252/library/k8s-keepalived:1.3.5],SizeBytes:356553439,},ContainerImage{Names:[sonobuoy/systemd-logs@sha256:fadad24a66ddd544987c38811108a73d1a306dd3b5e3f090b207786f2825ffde sonobuoy/systemd-logs:v0.3],SizeBytes:297365055,},ContainerImage{Names:[192.168.1.252/library/etcd-amd64@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216 
192.168.1.252/library/etcd-amd64:3.4.3-0],SizeBytes:288426917,},ContainerImage{Names:[192.168.1.252/library/node@sha256:3226b047a7034918a05c986347c5fb4d2cce6d0844f325851bfba586271ee617 192.168.1.252/library/node:v3.12.0],SizeBytes:257501722,},ContainerImage{Names:[192.168.1.252/library/cni@sha256:dc3bc525f1d3b794db1f2a7ceb7d8b84699d13e1431fbc117063f7e2075ff4b5 192.168.1.252/library/cni:v3.12.0],SizeBytes:206678344,},ContainerImage{Names:[192.168.1.252/library/kube-apiserver@sha256:4ee4113bcce32ae436b15364b09a2439dee15f6a19bbcf43a470d3dc879b0c62 192.168.1.252/library/kube-apiserver:v1.17.3],SizeBytes:170986003,},ContainerImage{Names:[192.168.1.252/library/envoy@sha256:b36ee021fc4d285de7861dbaee01e7437ce1d63814ead6ae3e4dfcad4a951b2e 192.168.1.252/library/envoy:v1.12.2],SizeBytes:170487454,},ContainerImage{Names:[192.168.1.252/library/kube-controller-manager@sha256:8f03391c0d22e3da8d22725178efe3c4338e1920504f7f4eb4a1c7f5f40c4c6e 192.168.1.252/library/kube-controller-manager:v1.17.3],SizeBytes:160918035,},ContainerImage{Names:[192.168.1.252/library/dashboard@sha256:4e0d39dae7e089b77fe2bbcef648f89905716db9c1f0884950bfd42d9f446c29 192.168.1.252/library/dashboard:v2.0.0-rc5],SizeBytes:126359420,},ContainerImage{Names:[192.168.1.252/library/kube-proxy@sha256:8dfd672298f8fbcd37731e0f2e45be87199cee6ee1f482ffec766693dcec0ec4 192.168.1.252/library/kube-proxy:v1.17.3],SizeBytes:115964919,},ContainerImage{Names:[192.168.1.252/library/pod2daemon-flexvol@sha256:2bf967507ad1adb749f3484b5d39e7d7b8700c4a0f836e8093dae5c57a585ccf 192.168.1.252/library/pod2daemon-flexvol:v3.12.0],SizeBytes:111122324,},ContainerImage{Names:[192.168.1.252/library/kube-scheduler@sha256:eea535b58da5b2fb66b32c61c6913d014cf85061a39527ad9bca2fa84b53dc1c 192.168.1.252/library/kube-scheduler:v1.17.3],SizeBytes:94435859,},ContainerImage{Names:[sonobuoy/sonobuoy@sha256:73f9cfe546ac6d0d5b94308293484c29c3b02fc341a975cfca97c80dd8728ed7 sonobuoy/sonobuoy:v0.17.2],SizeBytes:84339798,},ContainerImage{Names:[192.168.1.252/library/k8s-haproxy@sha256:f88bca67a2782e7bd88af168d5c11216c32104d12fe9240fac54a1d3196e3f9c 192.168.1.252/library/k8s-haproxy:2.0.0],SizeBytes:73550856,},ContainerImage{Names:[192.168.1.252/library/kube-controllers@sha256:edf14a5bcc663d2b0013b1830469626b7aa27206cbc7715ed83c042890ca5837 192.168.1.252/library/kube-controllers:v3.12.0],SizeBytes:56567983,},ContainerImage{Names:[192.168.1.252/library/typha@sha256:3baf9aef445a3224160748d6f560426eab798d6c65620020b2466e114bf6805f 192.168.1.252/library/typha:v3.12.0],SizeBytes:56034822,},ContainerImage{Names:[192.168.1.252/library/flannel@sha256:bd76b84c74ad70368a2341c2402841b75950df881388e43fc2aca000c546653a 192.168.1.252/library/flannel:v0.11.0-amd64],SizeBytes:52567296,},ContainerImage{Names:[192.168.1.252/library/ctl@sha256:128e4c95cf92a482496d591c43cad2a6a21fab1f0e8a8f13e8503f1324106dc8 192.168.1.252/library/ctl:v3.12.0],SizeBytes:47895826,},ContainerImage{Names:[192.168.1.252/library/coredns@sha256:608ac7ccba5ce41c6941fca13bc67059c1eef927fd968b554b790e21cc92543c 192.168.1.252/library/coredns:1.6.5],SizeBytes:41578211,},ContainerImage{Names:[192.168.1.252/library/metrics-scraper@sha256:e24a74b3b1cdc84d6285d507a12eb06907fd8c457b3e8ae9baa9418eca43efc4 192.168.1.252/library/metrics-scraper:v1.0.3],SizeBytes:40105664,},ContainerImage{Names:[192.168.1.252/library/metrics-server-amd64@sha256:c9c4e95068b51d6b33a9dccc61875df07dc650abbf4ac1a19d58b4628f89288b 
192.168.1.252/library/metrics-server-amd64:v0.3.6],SizeBytes:39944451,},ContainerImage{Names:[192.168.1.252/library/contour@sha256:3e10c69dfeaa830b84a50e6b47ce90e0f5a1aa84daf77f7662313077fa9579cf 192.168.1.252/library/contour:v1.1.0],SizeBytes:35721216,},ContainerImage{Names:[192.168.1.252/library/kuard-amd64@sha256:bd17153e9a3319f401acc7a27759243f37d422c06cbbf01cb3e1f54bbbfe14f4 192.168.1.252/library/kuard-amd64:1],SizeBytes:19745911,},ContainerImage{Names:[192.168.1.252/library/pause@sha256:fcaff905397ba63fd376d0c3019f1f1cb6e7506131389edbcb3d22719f1ae54d 192.168.1.252/library/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} | |
Mar 6 03:04:39.723: INFO: | |
Logging kubelet events for node master01 | |
Mar 6 03:04:39.726: INFO: | |
Logging pods the kubelet thinks are on node master01 | |
Mar 6 03:04:39.737: INFO: kube-flannel-ds-amd64-6mbnb started at 2020-03-06 02:30:22 +0000 UTC (1+1 container statuses recorded) | |
Mar 6 03:04:39.737: INFO: Init container install-cni ready: true, restart count 0 | |
Mar 6 03:04:39.737: INFO: Container kube-flannel ready: true, restart count 0 | |
Mar 6 03:04:39.737: INFO: kube-proxy-4j8ft started at 2020-03-06 02:30:22 +0000 UTC (0+1 container statuses recorded) | |
Mar 6 03:04:39.737: INFO: Container kube-proxy ready: true, restart count 0 | |
Mar 6 03:04:39.737: INFO: kube-apiserver-master01 started at 2020-03-06 02:30:22 +0000 UTC (0+1 container statuses recorded) | |
Mar 6 03:04:39.737: INFO: Container kube-apiserver ready: true, restart count 0 | |
Mar 6 03:04:39.737: INFO: kube-controller-manager-master01 started at 2020-03-06 02:30:22 +0000 UTC (0+1 container statuses recorded) | |
Mar 6 03:04:39.737: INFO: Container kube-controller-manager ready: true, restart count 1 | |
Mar 6 03:04:39.737: INFO: kube-scheduler-master01 started at 2020-03-06 02:30:22 +0000 UTC (0+1 container statuses recorded) | |
Mar 6 03:04:39.737: INFO: Container kube-scheduler ready: true, restart count 1 | |
Mar 6 03:04:39.737: INFO: sonobuoy-systemd-logs-daemon-set-2e5dbce20e154397-drhpn started at 2020-03-06 02:38:12 +0000 UTC (0+2 container statuses recorded) | |
Mar 6 03:04:39.737: INFO: Container sonobuoy-worker ready: true, restart count 0 | |
Mar 6 03:04:39.737: INFO: Container systemd-logs ready: true, restart count 0 | |
W0306 03:04:39.740027 19 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. | |
Mar 6 03:04:39.753: INFO: | |
Latency metrics for node master01 | |
Mar 6 03:04:39.753: INFO: | |
Logging node info for node master02 | |
Mar 6 03:04:39.755: INFO: Node Info: &Node{ObjectMeta:{master02 /api/v1/nodes/master02 6a0ecb6f-ef31-4754-858b-3eba76999224 10302 0 2020-03-06 02:29:17 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master02 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"82:c1:38:99:3b:39"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:192.168.1.248 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{42140479488 0} {<nil>} 41152812Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3823214592 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{37926431477 0} {<nil>} 37926431477 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3718356992 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-03-06 03:03:55 +0000 UTC,LastTransitionTime:2020-03-06 02:29:16 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-03-06 03:03:55 +0000 UTC,LastTransitionTime:2020-03-06 02:29:16 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-03-06 03:03:55 +0000 UTC,LastTransitionTime:2020-03-06 02:29:16 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-03-06 03:03:55 +0000 UTC,LastTransitionTime:2020-03-06 02:30:10 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:192.168.1.248,},NodeAddress{Type:Hostname,Address:master02,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:20200220105402131453637367482142,SystemUUID:8B2C4639-6D22-4D0D-A03C-F6D7E328F9D5,BootID:efd7329f-ae31-4806-ba13-7fdd5fad57df,KernelVersion:3.10.0-1062.12.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.5,KubeletVersion:v1.17.3,KubeProxyVersion:v1.17.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[sonobuoy/systemd-logs@sha256:fadad24a66ddd544987c38811108a73d1a306dd3b5e3f090b207786f2825ffde sonobuoy/systemd-logs:v0.3],SizeBytes:297365055,},ContainerImage{Names:[192.168.1.252/library/etcd-amd64@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216 192.168.1.252/library/etcd-amd64:3.4.3-0],SizeBytes:288426917,},ContainerImage{Names:[192.168.1.252/library/kube-apiserver@sha256:4ee4113bcce32ae436b15364b09a2439dee15f6a19bbcf43a470d3dc879b0c62 
192.168.1.252/library/kube-apiserver:v1.17.3],SizeBytes:170986003,},ContainerImage{Names:[192.168.1.252/library/kube-controller-manager@sha256:8f03391c0d22e3da8d22725178efe3c4338e1920504f7f4eb4a1c7f5f40c4c6e 192.168.1.252/library/kube-controller-manager:v1.17.3],SizeBytes:160918035,},ContainerImage{Names:[192.168.1.252/library/kube-proxy@sha256:8dfd672298f8fbcd37731e0f2e45be87199cee6ee1f482ffec766693dcec0ec4 192.168.1.252/library/kube-proxy:v1.17.3],SizeBytes:115964919,},ContainerImage{Names:[192.168.1.252/library/kube-scheduler@sha256:eea535b58da5b2fb66b32c61c6913d014cf85061a39527ad9bca2fa84b53dc1c 192.168.1.252/library/kube-scheduler:v1.17.3],SizeBytes:94435859,},ContainerImage{Names:[sonobuoy/sonobuoy@sha256:73f9cfe546ac6d0d5b94308293484c29c3b02fc341a975cfca97c80dd8728ed7 sonobuoy/sonobuoy:v0.17.2],SizeBytes:84339798,},ContainerImage{Names:[192.168.1.252/library/k8s-haproxy@sha256:f88bca67a2782e7bd88af168d5c11216c32104d12fe9240fac54a1d3196e3f9c 192.168.1.252/library/k8s-haproxy:2.0.0],SizeBytes:73550856,},ContainerImage{Names:[192.168.1.252/library/flannel@sha256:bd76b84c74ad70368a2341c2402841b75950df881388e43fc2aca000c546653a 192.168.1.252/library/flannel:v0.11.0-amd64],SizeBytes:52567296,},ContainerImage{Names:[192.168.1.252/library/coredns@sha256:608ac7ccba5ce41c6941fca13bc67059c1eef927fd968b554b790e21cc92543c 192.168.1.252/library/coredns:1.6.5],SizeBytes:41578211,},ContainerImage{Names:[192.168.1.252/library/pause@sha256:fcaff905397ba63fd376d0c3019f1f1cb6e7506131389edbcb3d22719f1ae54d 192.168.1.252/library/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} | |
Mar 6 03:04:39.755: INFO: | |
Logging kubelet events for node master02 | |
Mar 6 03:04:39.759: INFO: | |
Logging pods the kubelet thinks are on node master02 | |
Mar 6 03:04:39.768: INFO: coredns-7795996659-phdkc started at 2020-03-06 02:30:13 +0000 UTC (0+1 container statuses recorded) | |
Mar 6 03:04:39.768: INFO: Container coredns ready: true, restart count 0 | |
Mar 6 03:04:39.768: INFO: sonobuoy-systemd-logs-daemon-set-2e5dbce20e154397-2zmwm started at 2020-03-06 02:38:12 +0000 UTC (0+2 container statuses recorded) | |
Mar 6 03:04:39.768: INFO: Container sonobuoy-worker ready: true, restart count 0 | |
Mar 6 03:04:39.768: INFO: Container systemd-logs ready: true, restart count 0 | |
Mar 6 03:04:39.768: INFO: kube-apiserver-master02 started at 2020-03-06 02:29:50 +0000 UTC (0+1 container statuses recorded) | |
Mar 6 03:04:39.768: INFO: Container kube-apiserver ready: true, restart count 0 | |
Mar 6 03:04:39.768: INFO: kube-controller-manager-master02 started at 2020-03-06 02:29:50 +0000 UTC (0+1 container statuses recorded) | |
Mar 6 03:04:39.768: INFO: Container kube-controller-manager ready: true, restart count 1 | |
Mar 6 03:04:39.768: INFO: kube-scheduler-master02 started at 2020-03-06 02:29:23 +0000 UTC (0+1 container statuses recorded) | |
Mar 6 03:04:39.768: INFO: Container kube-scheduler ready: true, restart count 1 | |
Mar 6 03:04:39.768: INFO: kube-proxy-scdss started at 2020-03-06 02:30:00 +0000 UTC (0+1 container statuses recorded) | |
Mar 6 03:04:39.768: INFO: Container kube-proxy ready: true, restart count 0 | |
Mar 6 03:04:39.768: INFO: kube-flannel-ds-amd64-vfl78 started at 2020-03-06 02:30:00 +0000 UTC (1+1 container statuses recorded) | |
Mar 6 03:04:39.768: INFO: Init container install-cni ready: true, restart count 0 | |
Mar 6 03:04:39.768: INFO: Container kube-flannel ready: true, restart count 0 | |
W0306 03:04:39.771331 19 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. | |
Mar 6 03:04:39.787: INFO: | |
Latency metrics for node master02 | |
Mar 6 03:04:39.787: INFO: | |
Logging node info for node master03 | |
Mar 6 03:04:39.789: INFO: Node Info: &Node{ObjectMeta:{master03 /api/v1/nodes/master03 c508ee4c-fe9d-4c73-a857-e57fba26fa86 10303 0 2020-03-06 02:29:17 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master03 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"4a:aa:08:ea:16:90"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:192.168.1.249 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{42140479488 0} {<nil>} 41152812Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3823226880 0} {<nil>} 3733620Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{37926431477 0} {<nil>} 37926431477 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3718369280 0} {<nil>} 3631220Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-03-06 03:03:55 +0000 UTC,LastTransitionTime:2020-03-06 02:29:17 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-03-06 03:03:55 +0000 UTC,LastTransitionTime:2020-03-06 02:29:17 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-03-06 03:03:55 +0000 UTC,LastTransitionTime:2020-03-06 02:29:17 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-03-06 03:03:55 +0000 UTC,LastTransitionTime:2020-03-06 02:30:10 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:192.168.1.249,},NodeAddress{Type:Hostname,Address:master03,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:20200220105402131453637367482142,SystemUUID:0C8F0A67-EB7E-42DE-9418-0973FE735A08,BootID:05b1fa23-e6be-4032-bc93-8800264dff91,KernelVersion:3.10.0-1062.12.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.5,KubeletVersion:v1.17.3,KubeProxyVersion:v1.17.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[sonobuoy/systemd-logs@sha256:fadad24a66ddd544987c38811108a73d1a306dd3b5e3f090b207786f2825ffde sonobuoy/systemd-logs:v0.3],SizeBytes:297365055,},ContainerImage{Names:[192.168.1.252/library/etcd-amd64@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216 192.168.1.252/library/etcd-amd64:3.4.3-0],SizeBytes:288426917,},ContainerImage{Names:[192.168.1.252/library/kube-apiserver@sha256:4ee4113bcce32ae436b15364b09a2439dee15f6a19bbcf43a470d3dc879b0c62 
192.168.1.252/library/kube-apiserver:v1.17.3],SizeBytes:170986003,},ContainerImage{Names:[192.168.1.252/library/kube-controller-manager@sha256:8f03391c0d22e3da8d22725178efe3c4338e1920504f7f4eb4a1c7f5f40c4c6e 192.168.1.252/library/kube-controller-manager:v1.17.3],SizeBytes:160918035,},ContainerImage{Names:[192.168.1.252/library/dashboard@sha256:4e0d39dae7e089b77fe2bbcef648f89905716db9c1f0884950bfd42d9f446c29 192.168.1.252/library/dashboard:v2.0.0-rc5],SizeBytes:126359420,},ContainerImage{Names:[192.168.1.252/library/kube-proxy@sha256:8dfd672298f8fbcd37731e0f2e45be87199cee6ee1f482ffec766693dcec0ec4 192.168.1.252/library/kube-proxy:v1.17.3],SizeBytes:115964919,},ContainerImage{Names:[192.168.1.252/library/kube-scheduler@sha256:eea535b58da5b2fb66b32c61c6913d014cf85061a39527ad9bca2fa84b53dc1c 192.168.1.252/library/kube-scheduler:v1.17.3],SizeBytes:94435859,},ContainerImage{Names:[sonobuoy/sonobuoy@sha256:73f9cfe546ac6d0d5b94308293484c29c3b02fc341a975cfca97c80dd8728ed7 sonobuoy/sonobuoy:v0.17.2],SizeBytes:84339798,},ContainerImage{Names:[192.168.1.252/library/k8s-haproxy@sha256:f88bca67a2782e7bd88af168d5c11216c32104d12fe9240fac54a1d3196e3f9c 192.168.1.252/library/k8s-haproxy:2.0.0],SizeBytes:73550856,},ContainerImage{Names:[192.168.1.252/library/flannel@sha256:bd76b84c74ad70368a2341c2402841b75950df881388e43fc2aca000c546653a 192.168.1.252/library/flannel:v0.11.0-amd64],SizeBytes:52567296,},ContainerImage{Names:[192.168.1.252/library/coredns@sha256:608ac7ccba5ce41c6941fca13bc67059c1eef927fd968b554b790e21cc92543c 192.168.1.252/library/coredns:1.6.5],SizeBytes:41578211,},ContainerImage{Names:[192.168.1.252/library/metrics-scraper@sha256:e24a74b3b1cdc84d6285d507a12eb06907fd8c457b3e8ae9baa9418eca43efc4 192.168.1.252/library/metrics-scraper:v1.0.3],SizeBytes:40105664,},ContainerImage{Names:[192.168.1.252/library/pause@sha256:fcaff905397ba63fd376d0c3019f1f1cb6e7506131389edbcb3d22719f1ae54d 192.168.1.252/library/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} | |
Mar 6 03:04:39.789: INFO: | |
Logging kubelet events for node master03 | |
Mar 6 03:04:39.793: INFO: | |
Logging pods the kubelet thinks are on node master03 | |
Mar 6 03:04:39.803: INFO: kubernetes-dashboard-6647798d59-j2ms4 started at 2020-03-06 02:30:10 +0000 UTC (0+1 container statuses recorded) | |
Mar 6 03:04:39.803: INFO: Container kubernetes-dashboard ready: true, restart count 0 | |
Mar 6 03:04:39.803: INFO: coredns-7795996659-cmq4d started at 2020-03-06 02:30:13 +0000 UTC (0+1 container statuses recorded) | |
Mar 6 03:04:39.803: INFO: Container coredns ready: true, restart count 0 | |
Mar 6 03:04:39.803: INFO: sonobuoy-systemd-logs-daemon-set-2e5dbce20e154397-w5psq started at 2020-03-06 02:38:12 +0000 UTC (0+2 container statuses recorded) | |
Mar 6 03:04:39.803: INFO: Container sonobuoy-worker ready: true, restart count 0 | |
Mar 6 03:04:39.803: INFO: Container systemd-logs ready: true, restart count 0 | |
Mar 6 03:04:39.803: INFO: kube-apiserver-master03 started at 2020-03-06 02:29:24 +0000 UTC (0+1 container statuses recorded) | |
Mar 6 03:04:39.803: INFO: Container kube-apiserver ready: true, restart count 0 | |
Mar 6 03:04:39.803: INFO: kube-scheduler-master03 started at 2020-03-06 02:29:38 +0000 UTC (0+1 container statuses recorded) | |
Mar 6 03:04:39.803: INFO: Container kube-scheduler ready: true, restart count 1 | |
Mar 6 03:04:39.803: INFO: kube-proxy-stbnn started at 2020-03-06 02:30:00 +0000 UTC (0+1 container statuses recorded) | |
Mar 6 03:04:39.803: INFO: Container kube-proxy ready: true, restart count 0 | |
Mar 6 03:04:39.803: INFO: kube-controller-manager-master03 started at 2020-03-06 02:29:50 +0000 UTC (0+1 container statuses recorded) | |
Mar 6 03:04:39.803: INFO: Container kube-controller-manager ready: true, restart count 1 | |
Mar 6 03:04:39.803: INFO: kube-flannel-ds-amd64-hs69k started at 2020-03-06 02:30:00 +0000 UTC (1+1 container statuses recorded) | |
Mar 6 03:04:39.803: INFO: Init container install-cni ready: true, restart count 0 | |
Mar 6 03:04:39.803: INFO: Container kube-flannel ready: true, restart count 0 | |
Mar 6 03:04:39.803: INFO: dashboard-metrics-scraper-56568cb9d7-d57kl started at 2020-03-06 02:30:10 +0000 UTC (0+1 container statuses recorded) | |
Mar 6 03:04:39.803: INFO: Container dashboard-metrics-scraper ready: true, restart count 0 | |
W0306 03:04:39.806149 19 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. | |
Mar 6 03:04:39.825: INFO: | |
Latency metrics for node master03 | |
Mar 6 03:04:39.825: INFO: | |
Logging node info for node worker01 | |
Mar 6 03:04:39.827: INFO: Node Info: &Node{ObjectMeta:{worker01 /api/v1/nodes/worker01 cf4203bb-1bfa-4b35-991f-935275b6bc46 9595 0 2020-03-06 02:30:30 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:worker01 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"5a:49:f5:5b:74:b3"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:192.168.1.250 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{42140479488 0} {<nil>} 41152812Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3823214592 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{37926431477 0} {<nil>} 37926431477 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3718356992 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-03-06 03:01:20 +0000 UTC,LastTransitionTime:2020-03-06 02:30:30 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-03-06 03:01:20 +0000 UTC,LastTransitionTime:2020-03-06 02:30:30 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-03-06 03:01:20 +0000 UTC,LastTransitionTime:2020-03-06 02:30:30 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-03-06 03:01:20 +0000 UTC,LastTransitionTime:2020-03-06 02:30:45 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:192.168.1.250,},NodeAddress{Type:Hostname,Address:worker01,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:20200220105402131453637367482142,SystemUUID:5CA364AA-0FF8-4B57-BA86-F28699575F0D,BootID:c85ad0c4-ebcf-4d01-97f0-a36c1cfc50be,KernelVersion:3.10.0-1062.12.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.5,KubeletVersion:v1.17.3,KubeProxyVersion:v1.17.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[sonobuoy/systemd-logs@sha256:fadad24a66ddd544987c38811108a73d1a306dd3b5e3f090b207786f2825ffde sonobuoy/systemd-logs:v0.3],SizeBytes:297365055,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[192.168.1.252/library/kube-proxy@sha256:8dfd672298f8fbcd37731e0f2e45be87199cee6ee1f482ffec766693dcec0ec4 192.168.1.252/library/kube-proxy:v1.17.3],SizeBytes:115964919,},ContainerImage{Names:[sonobuoy/sonobuoy@sha256:73f9cfe546ac6d0d5b94308293484c29c3b02fc341a975cfca97c80dd8728ed7 sonobuoy/sonobuoy:v0.17.2],SizeBytes:84339798,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5 gcr.io/kubernetes-e2e-test-images/agnhost:2.8],SizeBytes:52800335,},ContainerImage{Names:[192.168.1.252/library/flannel@sha256:bd76b84c74ad70368a2341c2402841b75950df881388e43fc2aca000c546653a 192.168.1.252/library/flannel:v0.11.0-amd64],SizeBytes:52567296,},ContainerImage{Names:[192.168.1.252/library/metrics-server-amd64@sha256:c9c4e95068b51d6b33a9dccc61875df07dc650abbf4ac1a19d58b4628f89288b 192.168.1.252/library/metrics-server-amd64:v0.3.6],SizeBytes:39944451,},ContainerImage{Names:[192.168.1.252/library/kuard-amd64@sha256:bd17153e9a3319f401acc7a27759243f37d422c06cbbf01cb3e1f54bbbfe14f4 192.168.1.252/library/kuard-amd64:1],SizeBytes:19745911,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[192.168.1.252/library/pause@sha256:fcaff905397ba63fd376d0c3019f1f1cb6e7506131389edbcb3d22719f1ae54d 192.168.1.252/library/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} | |
Mar 6 03:04:39.827: INFO: | |
Logging kubelet events for node worker01 | |
Mar 6 03:04:39.831: INFO: | |
Logging pods the kubelet thinks are on node worker01 | |
Mar 6 03:04:39.843: INFO: contour-54748c65f5-jl5wz started at 2020-03-06 02:30:46 +0000 UTC (0+1 container statuses recorded) | |
Mar 6 03:04:39.843: INFO: Container contour ready: false, restart count 0 | |
Mar 6 03:04:39.843: INFO: metrics-server-78799bf646-xrsnn started at 2020-03-06 02:30:51 +0000 UTC (0+1 container statuses recorded) | |
Mar 6 03:04:39.843: INFO: Container metrics-server ready: true, restart count 0 | |
Mar 6 03:04:39.843: INFO: kube-proxy-kcb8f started at 2020-03-06 02:30:30 +0000 UTC (0+1 container statuses recorded) | |
Mar 6 03:04:39.843: INFO: Container kube-proxy ready: true, restart count 0 | |
Mar 6 03:04:39.843: INFO: contour-certgen-82k46 started at 2020-03-06 02:30:46 +0000 UTC (0+1 container statuses recorded) | |
Mar 6 03:04:39.843: INFO: Container contour ready: false, restart count 0 | |
Mar 6 03:04:39.843: INFO: contour-54748c65f5-gk5sz started at 2020-03-06 02:30:51 +0000 UTC (0+1 container statuses recorded) | |
Mar 6 03:04:39.843: INFO: Container contour ready: false, restart count 0 | |
Mar 6 03:04:39.843: INFO: sonobuoy-systemd-logs-daemon-set-2e5dbce20e154397-2bz8g started at 2020-03-06 02:38:12 +0000 UTC (0+2 container statuses recorded) | |
Mar 6 03:04:39.843: INFO: Container sonobuoy-worker ready: true, restart count 0 | |
Mar 6 03:04:39.843: INFO: Container systemd-logs ready: true, restart count 0 | |
Mar 6 03:04:39.843: INFO: kube-flannel-ds-amd64-xxhz9 started at 2020-03-06 02:30:30 +0000 UTC (1+1 container statuses recorded) | |
Mar 6 03:04:39.843: INFO: Init container install-cni ready: true, restart count 0 | |
Mar 6 03:04:39.843: INFO: Container kube-flannel ready: true, restart count 1 | |
Mar 6 03:04:39.843: INFO: kuard-678c676f5d-vsn86 started at 2020-03-06 02:30:51 +0000 UTC (0+1 container statuses recorded) | |
Mar 6 03:04:39.843: INFO: Container kuard ready: true, restart count 0 | |
Mar 6 03:04:39.843: INFO: envoy-lvmcb started at 2020-03-06 02:30:45 +0000 UTC (1+1 container statuses recorded) | |
Mar 6 03:04:39.843: INFO: Init container envoy-initconfig ready: false, restart count 0 | |
Mar 6 03:04:39.843: INFO: Container envoy ready: false, restart count 0 | |
Mar 6 03:04:39.843: INFO: kuard-678c676f5d-m29b6 started at 2020-03-06 02:30:49 +0000 UTC (0+1 container statuses recorded) | |
Mar 6 03:04:39.843: INFO: Container kuard ready: true, restart count 0 | |
Mar 6 03:04:39.843: INFO: kuard-678c676f5d-tzsnn started at 2020-03-06 02:30:51 +0000 UTC (0+1 container statuses recorded) | |
Mar 6 03:04:39.843: INFO: Container kuard ready: true, restart count 0 | |
W0306 03:04:39.846300 19 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. | |
Mar 6 03:04:39.862: INFO: | |
Latency metrics for node worker01 | |
Mar 6 03:04:39.862: INFO: | |
Logging node info for node worker02 | |
Mar 6 03:04:39.864: INFO: Node Info: &Node{ObjectMeta:{worker02 /api/v1/nodes/worker02 f0994ba1-7e4e-4cc8-b3c8-25d34b25d9ce 10264 0 2020-03-06 02:30:30 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:worker02 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"1a:75:0a:e8:cc:76"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:192.168.1.251 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{42140479488 0} {<nil>} 41152812Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3823214592 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{37926431477 0} {<nil>} 37926431477 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3718356992 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-03-06 03:03:50 +0000 UTC,LastTransitionTime:2020-03-06 02:30:30 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-03-06 03:03:50 +0000 UTC,LastTransitionTime:2020-03-06 02:30:30 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-03-06 03:03:50 +0000 UTC,LastTransitionTime:2020-03-06 02:30:30 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-03-06 03:03:50 +0000 UTC,LastTransitionTime:2020-03-06 02:30:55 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:192.168.1.251,},NodeAddress{Type:Hostname,Address:worker02,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:20200220105402131453637367482142,SystemUUID:EDBF7E33-228B-4233-93CF-7850B5A311E4,BootID:bd6a4f0f-5ddb-4585-83df-253b9292b617,KernelVersion:3.10.0-1062.12.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.5,KubeletVersion:v1.17.3,KubeProxyVersion:v1.17.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[gcr.io/google-containers/conformance@sha256:502434491cbc3fac5d9a606a879e554cf881b2ba5b688bed25f2c33d3ff1c777 gcr.io/google-containers/conformance:v1.17.3],SizeBytes:575831882,},ContainerImage{Names:[sonobuoy/systemd-logs@sha256:fadad24a66ddd544987c38811108a73d1a306dd3b5e3f090b207786f2825ffde sonobuoy/systemd-logs:v0.3],SizeBytes:297365055,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:195659796,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[192.168.1.252/library/kube-proxy@sha256:8dfd672298f8fbcd37731e0f2e45be87199cee6ee1f482ffec766693dcec0ec4 192.168.1.252/library/kube-proxy:v1.17.3],SizeBytes:115964919,},ContainerImage{Names:[sonobuoy/sonobuoy@sha256:73f9cfe546ac6d0d5b94308293484c29c3b02fc341a975cfca97c80dd8728ed7 sonobuoy/sonobuoy:v0.17.2],SizeBytes:84339798,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5 gcr.io/kubernetes-e2e-test-images/agnhost:2.8],SizeBytes:52800335,},ContainerImage{Names:[192.168.1.252/library/flannel@sha256:bd76b84c74ad70368a2341c2402841b75950df881388e43fc2aca000c546653a 192.168.1.252/library/flannel:v0.11.0-amd64],SizeBytes:52567296,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/dnsutils@sha256:2abeee84efb79c14d731966e034af33bf324d3b26ca28497555511ff094b3ddd gcr.io/kubernetes-e2e-test-images/dnsutils:1.1],SizeBytes:9349974,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0],SizeBytes:4732240,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0],SizeBytes:1450451,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[192.168.1.252/library/pause@sha256:fcaff905397ba63fd376d0c3019f1f1cb6e7506131389edbcb3d22719f1ae54d k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea 192.168.1.252/library/pause:3.1 k8s.gcr.io/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} | |
Mar 6 03:04:39.864: INFO: | |
Logging kubelet events for node worker02 | |
Mar 6 03:04:39.868: INFO: | |
Logging pods the kubelet thinks are on node worker02 | |
Mar 6 03:04:39.872: INFO: kube-flannel-ds-amd64-ztfzf started at 2020-03-06 02:30:30 +0000 UTC (1+1 container statuses recorded) | |
Mar 6 03:04:39.872: INFO: Init container install-cni ready: true, restart count 0 | |
Mar 6 03:04:39.872: INFO: Container kube-flannel ready: true, restart count 0 | |
Mar 6 03:04:39.872: INFO: envoy-wgz76 started at 2020-03-06 02:30:55 +0000 UTC (1+1 container statuses recorded) | |
Mar 6 03:04:39.872: INFO: Init container envoy-initconfig ready: false, restart count 0 | |
Mar 6 03:04:39.872: INFO: Container envoy ready: false, restart count 0 | |
Mar 6 03:04:39.872: INFO: sonobuoy-e2e-job-67137ff64ac145d3 started at 2020-03-06 02:38:12 +0000 UTC (0+2 container statuses recorded) | |
Mar 6 03:04:39.872: INFO: Container e2e ready: true, restart count 0 | |
Mar 6 03:04:39.872: INFO: Container sonobuoy-worker ready: true, restart count 0 | |
Mar 6 03:04:39.872: INFO: sonobuoy-systemd-logs-daemon-set-2e5dbce20e154397-bpjtd started at 2020-03-06 02:38:12 +0000 UTC (0+2 container statuses recorded) | |
Mar 6 03:04:39.872: INFO: Container sonobuoy-worker ready: true, restart count 0 | |
Mar 6 03:04:39.872: INFO: Container systemd-logs ready: true, restart count 0 | |
Mar 6 03:04:39.872: INFO: kube-proxy-5xxdb started at 2020-03-06 02:30:30 +0000 UTC (0+1 container statuses recorded) | |
Mar 6 03:04:39.872: INFO: Container kube-proxy ready: true, restart count 1 | |
Mar 6 03:04:39.872: INFO: sonobuoy started at 2020-03-06 02:38:02 +0000 UTC (0+1 container statuses recorded) | |
Mar 6 03:04:39.872: INFO: Container kube-sonobuoy ready: true, restart count 0 | |
Mar 6 03:04:39.872: INFO: sample-webhook-deployment-5f65f8c764-d65fz started at 2020-03-06 03:03:44 +0000 UTC (0+1 container statuses recorded) | |
Mar 6 03:04:39.872: INFO: Container sample-webhook ready: true, restart count 0 | |
W0306 03:04:39.875285 19 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. | |
Mar 6 03:04:39.906: INFO: | |
Latency metrics for node worker02 | |
Mar 6 03:04:39.906: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
STEP: Destroying namespace "webhook-7586" for this suite. | |
STEP: Destroying namespace "webhook-7586-markers" for this suite. | |
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 | |
• Failure [56.522 seconds] | |
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 | |
should unconditionally reject operations on fail closed webhook [Conformance] [It] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
Mar 6 03:04:39.712: waiting for webhook configuration to be ready | |
Unexpected error: | |
<*errors.errorString | 0xc0000b3950>: { | |
s: "timed out waiting for the condition", | |
} | |
timed out waiting for the condition | |
occurred | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:1303 | |
------------------------------ | |
{"msg":"FAILED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":278,"completed":63,"skipped":1279,"failed":5,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]"]} | |
SSSSSSSSSSSSSSSSSSSSSSSSSSS | |
------------------------------ | |
[sig-api-machinery] ResourceQuota | |
should verify ResourceQuota with terminating scopes. [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
[BeforeEach] [sig-api-machinery] ResourceQuota | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 | |
STEP: Creating a kubernetes client | |
Mar 6 03:04:39.968: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759 | |
STEP: Building a namespace api object, basename resourcequota | |
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in resourcequota-6301 | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[It] should verify ResourceQuota with terminating scopes. [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
STEP: Creating a ResourceQuota with terminating scope | |
STEP: Ensuring ResourceQuota status is calculated | |
STEP: Creating a ResourceQuota with not terminating scope | |
STEP: Ensuring ResourceQuota status is calculated | |
STEP: Creating a long running pod | |
STEP: Ensuring resource quota with not terminating scope captures the pod usage | |
STEP: Ensuring resource quota with terminating scope ignored the pod usage | |
STEP: Deleting the pod | |
STEP: Ensuring resource quota status released the pod usage | |
STEP: Creating a terminating pod | |
STEP: Ensuring resource quota with terminating scope captures the pod usage | |
STEP: Ensuring resource quota with not terminating scope ignored the pod usage | |
STEP: Deleting the pod | |
STEP: Ensuring resource quota status released the pod usage | |
[AfterEach] [sig-api-machinery] ResourceQuota | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 | |
Mar 6 03:04:56.191: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
STEP: Destroying namespace "resourcequota-6301" for this suite. | |
• [SLOW TEST:16.229 seconds] | |
[sig-api-machinery] ResourceQuota | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 | |
should verify ResourceQuota with terminating scopes. [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
------------------------------ | |
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":278,"completed":64,"skipped":1306,"failed":5,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]"]} | |
SSSSSSSSSSSSSS | |
------------------------------ | |
[k8s.io] Probing container | |
should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
[BeforeEach] [k8s.io] Probing container | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 | |
STEP: Creating a kubernetes client | |
Mar 6 03:04:56.197: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759 | |
STEP: Building a namespace api object, basename container-probe | |
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-probe-1845 | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[BeforeEach] [k8s.io] Probing container | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 | |
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
STEP: Creating pod test-webserver-0c5dc7b9-5a24-4918-b926-1ceda7574db4 in namespace container-probe-1845 | |
Mar 6 03:04:58.387: INFO: Started pod test-webserver-0c5dc7b9-5a24-4918-b926-1ceda7574db4 in namespace container-probe-1845 | |
STEP: checking the pod's current state and verifying that restartCount is present | |
Mar 6 03:04:58.389: INFO: Initial restart count of pod test-webserver-0c5dc7b9-5a24-4918-b926-1ceda7574db4 is 0 | |
STEP: deleting the pod | |
[AfterEach] [k8s.io] Probing container | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 | |
Mar 6 03:08:58.734: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
STEP: Destroying namespace "container-probe-1845" for this suite. | |
• [SLOW TEST:242.548 seconds] | |
[k8s.io] Probing container | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 | |
should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
------------------------------ | |
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":278,"completed":65,"skipped":1320,"failed":5,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]"]} | |
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS | |
------------------------------ | |
[sig-cli] Kubectl client Kubectl run default | |
should create an rc or deployment from an image [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
[BeforeEach] [sig-cli] Kubectl client | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 | |
STEP: Creating a kubernetes client | |
Mar 6 03:08:58.745: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759 | |
STEP: Building a namespace api object, basename kubectl | |
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-8653 | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[BeforeEach] [sig-cli] Kubectl client | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 | |
[BeforeEach] Kubectl run default | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1596 | |
[It] should create an rc or deployment from an image [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
STEP: running the image docker.io/library/httpd:2.4.38-alpine | |
Mar 6 03:08:58.877: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-8653' | |
Mar 6 03:09:03.969: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" | |
Mar 6 03:09:03.969: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n" | |
STEP: verifying the pod controlled by e2e-test-httpd-deployment gets created | |
[AfterEach] Kubectl run default | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1602 | |
Mar 6 03:09:05.976: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 delete deployment e2e-test-httpd-deployment --namespace=kubectl-8653' | |
Mar 6 03:09:21.060: INFO: stderr: "" | |
Mar 6 03:09:21.060: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n" | |
[AfterEach] [sig-cli] Kubectl client | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 | |
Mar 6 03:09:21.060: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
STEP: Destroying namespace "kubectl-8653" for this suite. | |
• [SLOW TEST:22.323 seconds] | |
[sig-cli] Kubectl client | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 | |
Kubectl run default | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1590 | |
should create an rc or deployment from an image [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
------------------------------ | |
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run default should create an rc or deployment from an image [Conformance]","total":278,"completed":66,"skipped":1384,"failed":5,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]"]} | |
SSSSSSSSSS | |
------------------------------ | |
[sig-storage] Projected configMap | |
should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
[BeforeEach] [sig-storage] Projected configMap | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 | |
STEP: Creating a kubernetes client | |
Mar 6 03:09:21.068: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759 | |
STEP: Building a namespace api object, basename projected | |
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-1351 | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
STEP: Creating configMap with name projected-configmap-test-volume-bcd6bd70-65b1-4e89-b2c6-9900ba16fbfd | |
STEP: Creating a pod to test consume configMaps | |
Mar 6 03:09:21.211: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-ca070575-167b-4031-814b-cd124e89c10b" in namespace "projected-1351" to be "success or failure" | |
Mar 6 03:09:21.213: INFO: Pod "pod-projected-configmaps-ca070575-167b-4031-814b-cd124e89c10b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.371444ms | |
Mar 6 03:09:23.216: INFO: Pod "pod-projected-configmaps-ca070575-167b-4031-814b-cd124e89c10b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.005258599s | |
STEP: Saw pod success | |
Mar 6 03:09:23.216: INFO: Pod "pod-projected-configmaps-ca070575-167b-4031-814b-cd124e89c10b" satisfied condition "success or failure" | |
Mar 6 03:09:23.219: INFO: Trying to get logs from node worker02 pod pod-projected-configmaps-ca070575-167b-4031-814b-cd124e89c10b container projected-configmap-volume-test: <nil> | |
STEP: delete the pod | |
Mar 6 03:09:23.238: INFO: Waiting for pod pod-projected-configmaps-ca070575-167b-4031-814b-cd124e89c10b to disappear | |
Mar 6 03:09:23.240: INFO: Pod pod-projected-configmaps-ca070575-167b-4031-814b-cd124e89c10b no longer exists | |
[AfterEach] [sig-storage] Projected configMap | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 | |
Mar 6 03:09:23.240: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
STEP: Destroying namespace "projected-1351" for this suite. | |
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":278,"completed":67,"skipped":1394,"failed":5,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]"]} | |
SSSSSSSSSSSS | |
------------------------------ | |
[sig-storage] EmptyDir volumes | |
should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
[BeforeEach] [sig-storage] EmptyDir volumes | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 | |
STEP: Creating a kubernetes client | |
Mar 6 03:09:23.249: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759 | |
STEP: Building a namespace api object, basename emptydir | |
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-9953 | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
STEP: Creating a pod to test emptydir 0777 on tmpfs | |
Mar 6 03:09:23.385: INFO: Waiting up to 5m0s for pod "pod-278da763-c75e-417d-b575-cb3276a07b47" in namespace "emptydir-9953" to be "success or failure" | |
Mar 6 03:09:23.387: INFO: Pod "pod-278da763-c75e-417d-b575-cb3276a07b47": Phase="Pending", Reason="", readiness=false. Elapsed: 1.805488ms | |
Mar 6 03:09:25.390: INFO: Pod "pod-278da763-c75e-417d-b575-cb3276a07b47": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.00502025s | |
STEP: Saw pod success | |
Mar 6 03:09:25.390: INFO: Pod "pod-278da763-c75e-417d-b575-cb3276a07b47" satisfied condition "success or failure" | |
Mar 6 03:09:25.392: INFO: Trying to get logs from node worker02 pod pod-278da763-c75e-417d-b575-cb3276a07b47 container test-container: <nil> | |
STEP: delete the pod | |
Mar 6 03:09:25.411: INFO: Waiting for pod pod-278da763-c75e-417d-b575-cb3276a07b47 to disappear | |
Mar 6 03:09:25.413: INFO: Pod pod-278da763-c75e-417d-b575-cb3276a07b47 no longer exists | |
[AfterEach] [sig-storage] EmptyDir volumes | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 | |
Mar 6 03:09:25.413: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
STEP: Destroying namespace "emptydir-9953" for this suite. | |
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":68,"skipped":1406,"failed":5,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]"]} | |
SSSSSSSSSSSSSS | |
------------------------------ | |
[sig-network] DNS | |
should provide DNS for ExternalName services [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
[BeforeEach] [sig-network] DNS | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 | |
STEP: Creating a kubernetes client | |
Mar 6 03:09:25.424: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759 | |
STEP: Building a namespace api object, basename dns | |
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in dns-5611 | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[It] should provide DNS for ExternalName services [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
STEP: Creating a test externalName service | |
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5611.svc.cluster.local CNAME > /results/[email protected]; sleep 1; done | |
STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5611.svc.cluster.local CNAME > /results/[email protected]; sleep 1; done | |
STEP: creating a pod to probe DNS | |
STEP: submitting the pod to kubernetes | |
STEP: retrieving the pod | |
STEP: looking for the results for each expected name from probers | |
Mar 6 03:09:27.577: INFO: DNS probes using dns-test-a98920a4-b718-44c7-bc2d-8cd0482cf687 succeeded | |
STEP: deleting the pod | |
STEP: changing the externalName to bar.example.com | |
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5611.svc.cluster.local CNAME > /results/[email protected]; sleep 1; done | |
STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5611.svc.cluster.local CNAME > /results/[email protected]; sleep 1; done | |
STEP: creating a second pod to probe DNS | |
STEP: submitting the pod to kubernetes | |
STEP: retrieving the pod | |
STEP: looking for the results for each expected name from probers | |
Mar 6 03:09:29.621: INFO: DNS probes using dns-test-6efb8304-3eac-44f0-87f6-206056379368 succeeded | |
STEP: deleting the pod | |
STEP: changing the service to type=ClusterIP | |
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5611.svc.cluster.local A > /results/[email protected]; sleep 1; done | |
STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5611.svc.cluster.local A > /results/[email protected]; sleep 1; done | |
STEP: creating a third pod to probe DNS | |
STEP: submitting the pod to kubernetes | |
STEP: retrieving the pod | |
STEP: looking for the results for each expected name from probers | |
Mar 6 03:09:31.730: INFO: DNS probes using dns-test-2487b07c-a4d6-49c1-8210-f360ba516a76 succeeded | |
STEP: deleting the pod | |
STEP: deleting the test externalName service | |
[AfterEach] [sig-network] DNS | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 | |
Mar 6 03:09:31.759: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
STEP: Destroying namespace "dns-5611" for this suite. | |
• [SLOW TEST:6.348 seconds] | |
[sig-network] DNS | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 | |
should provide DNS for ExternalName services [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
------------------------------ | |
{"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":278,"completed":69,"skipped":1420,"failed":5,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]"]} | |
SS | |
------------------------------ | |
[k8s.io] Probing container | |
with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
[BeforeEach] [k8s.io] Probing container | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 | |
STEP: Creating a kubernetes client | |
Mar 6 03:09:31.773: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759 | |
STEP: Building a namespace api object, basename container-probe | |
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-probe-1189 | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[BeforeEach] [k8s.io] Probing container | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 | |
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
Mar 6 03:09:49.940: INFO: Container started at 2020-03-06 03:09:32 +0000 UTC, pod became ready at 2020-03-06 03:09:48 +0000 UTC | |
[AfterEach] [k8s.io] Probing container | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 | |
Mar 6 03:09:49.940: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
STEP: Destroying namespace "container-probe-1189" for this suite. | |
• [SLOW TEST:18.174 seconds] | |
[k8s.io] Probing container | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 | |
with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
------------------------------ | |
{"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":278,"completed":70,"skipped":1422,"failed":5,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]"]} | |
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS | |
------------------------------ | |
[sig-api-machinery] Garbage collector | |
should not be blocked by dependency circle [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
[BeforeEach] [sig-api-machinery] Garbage collector | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 | |
STEP: Creating a kubernetes client | |
Mar 6 03:09:49.947: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759 | |
STEP: Building a namespace api object, basename gc | |
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in gc-5136 | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[It] should not be blocked by dependency circle [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
Mar 6 03:09:50.098: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"f6019465-8be0-4be4-996d-73b89ad09d94", Controller:(*bool)(0xc00541481a), BlockOwnerDeletion:(*bool)(0xc00541481b)}} | |
Mar 6 03:09:50.103: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"b9e7854e-ef5e-457f-907f-a0c5ad02d48e", Controller:(*bool)(0xc00543af36), BlockOwnerDeletion:(*bool)(0xc00543af37)}} | |
Mar 6 03:09:50.111: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"00a9a301-1c85-445b-aaaa-b55f1b1cba3f", Controller:(*bool)(0xc0054149e6), BlockOwnerDeletion:(*bool)(0xc0054149e7)}} | |
[AfterEach] [sig-api-machinery] Garbage collector | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 | |
Mar 6 03:09:55.120: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
STEP: Destroying namespace "gc-5136" for this suite. | |
• [SLOW TEST:5.181 seconds] | |
[sig-api-machinery] Garbage collector | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 | |
should not be blocked by dependency circle [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
------------------------------ | |
{"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":278,"completed":71,"skipped":1454,"failed":5,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]"]} | |
S | |
------------------------------ | |
[sig-storage] Projected configMap | |
should be consumable from pods in volume with mappings [NodeConformance] [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
[BeforeEach] [sig-storage] Projected configMap | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 | |
STEP: Creating a kubernetes client | |
Mar 6 03:09:55.128: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759 | |
STEP: Building a namespace api object, basename projected | |
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-8972 | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
STEP: Creating configMap with name projected-configmap-test-volume-map-8b9a1de0-c15e-4d39-b0e6-46dd8f6fffba | |
STEP: Creating a pod to test consume configMaps | |
Mar 6 03:09:55.264: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-8828e7b2-5648-454a-8117-fee67b1efb80" in namespace "projected-8972" to be "success or failure" | |
Mar 6 03:09:55.266: INFO: Pod "pod-projected-configmaps-8828e7b2-5648-454a-8117-fee67b1efb80": Phase="Pending", Reason="", readiness=false. Elapsed: 1.708077ms | |
Mar 6 03:09:57.269: INFO: Pod "pod-projected-configmaps-8828e7b2-5648-454a-8117-fee67b1efb80": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.004295324s | |
STEP: Saw pod success | |
Mar 6 03:09:57.269: INFO: Pod "pod-projected-configmaps-8828e7b2-5648-454a-8117-fee67b1efb80" satisfied condition "success or failure" | |
Mar 6 03:09:57.270: INFO: Trying to get logs from node worker02 pod pod-projected-configmaps-8828e7b2-5648-454a-8117-fee67b1efb80 container projected-configmap-volume-test: <nil> | |
STEP: delete the pod | |
Mar 6 03:09:57.285: INFO: Waiting for pod pod-projected-configmaps-8828e7b2-5648-454a-8117-fee67b1efb80 to disappear | |
Mar 6 03:09:57.286: INFO: Pod pod-projected-configmaps-8828e7b2-5648-454a-8117-fee67b1efb80 no longer exists | |
[AfterEach] [sig-storage] Projected configMap | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 | |
Mar 6 03:09:57.286: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
STEP: Destroying namespace "projected-8972" for this suite. | |
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":72,"skipped":1455,"failed":5,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]"]} | |
SSSSS | |
------------------------------ | |
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] | |
works for multiple CRDs of same group and version but different kinds [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 | |
STEP: Creating a kubernetes client | |
Mar 6 03:09:57.293: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759 | |
STEP: Building a namespace api object, basename crd-publish-openapi | |
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in crd-publish-openapi-9322 | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[It] works for multiple CRDs of same group and version but different kinds [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation | |
Mar 6 03:09:57.432: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759 | |
Mar 6 03:10:11.268: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759 | |
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 | |
Mar 6 03:10:52.741: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
STEP: Destroying namespace "crd-publish-openapi-9322" for this suite. | |
• [SLOW TEST:55.455 seconds] | |
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 | |
works for multiple CRDs of same group and version but different kinds [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
------------------------------ | |
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":278,"completed":73,"skipped":1460,"failed":5,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]"]} | |
SSSSSSSS | |
------------------------------ | |
[k8s.io] KubeletManagedEtcHosts | |
should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
[BeforeEach] [k8s.io] KubeletManagedEtcHosts | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 | |
STEP: Creating a kubernetes client | |
Mar 6 03:10:52.748: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759 | |
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts | |
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-kubelet-etc-hosts-4633 | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
STEP: Setting up the test | |
STEP: Creating hostNetwork=false pod | |
STEP: Creating hostNetwork=true pod | |
STEP: Running the test | |
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false | |
Mar 6 03:10:56.898: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4633 PodName:test-pod ContainerName:busybox-1 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} | |
Mar 6 03:10:56.898: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759 | |
Mar 6 03:10:57.015: INFO: Exec stderr: "" | |
Mar 6 03:10:57.015: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4633 PodName:test-pod ContainerName:busybox-1 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} | |
Mar 6 03:10:57.015: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759 | |
Mar 6 03:10:57.151: INFO: Exec stderr: "" | |
Mar 6 03:10:57.151: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4633 PodName:test-pod ContainerName:busybox-2 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} | |
Mar 6 03:10:57.151: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759 | |
Mar 6 03:10:57.281: INFO: Exec stderr: "" | |
Mar 6 03:10:57.281: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4633 PodName:test-pod ContainerName:busybox-2 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} | |
Mar 6 03:10:57.281: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759 | |
Mar 6 03:10:57.418: INFO: Exec stderr: "" | |
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount | |
Mar 6 03:10:57.418: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4633 PodName:test-pod ContainerName:busybox-3 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} | |
Mar 6 03:10:57.418: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759 | |
Mar 6 03:10:57.545: INFO: Exec stderr: "" | |
Mar 6 03:10:57.545: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4633 PodName:test-pod ContainerName:busybox-3 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} | |
Mar 6 03:10:57.545: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759 | |
Mar 6 03:10:57.679: INFO: Exec stderr: "" | |
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true | |
Mar 6 03:10:57.679: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4633 PodName:test-host-network-pod ContainerName:busybox-1 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} | |
Mar 6 03:10:57.679: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759 | |
Mar 6 03:10:57.802: INFO: Exec stderr: "" | |
Mar 6 03:10:57.802: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4633 PodName:test-host-network-pod ContainerName:busybox-1 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} | |
Mar 6 03:10:57.802: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759 | |
Mar 6 03:10:57.952: INFO: Exec stderr: "" | |
Mar 6 03:10:57.952: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4633 PodName:test-host-network-pod ContainerName:busybox-2 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} | |
Mar 6 03:10:57.952: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759 | |
Mar 6 03:10:58.082: INFO: Exec stderr: "" | |
Mar 6 03:10:58.082: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4633 PodName:test-host-network-pod ContainerName:busybox-2 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} | |
Mar 6 03:10:58.082: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759 | |
Mar 6 03:10:58.214: INFO: Exec stderr: "" | |
[AfterEach] [k8s.io] KubeletManagedEtcHosts | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 | |
Mar 6 03:10:58.214: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
STEP: Destroying namespace "e2e-kubelet-etc-hosts-4633" for this suite. | |
• [SLOW TEST:5.474 seconds] | |
[k8s.io] KubeletManagedEtcHosts | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 | |
should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
------------------------------ | |
{"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":74,"skipped":1468,"failed":5,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]"]} | |
SSSSSSS | |
------------------------------ | |
[sig-cli] Kubectl client Kubectl expose | |
should create services for rc [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
[BeforeEach] [sig-cli] Kubectl client | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 | |
STEP: Creating a kubernetes client | |
Mar 6 03:10:58.222: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759 | |
STEP: Building a namespace api object, basename kubectl | |
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-8310 | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[BeforeEach] [sig-cli] Kubectl client | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 | |
[It] should create services for rc [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
STEP: creating Agnhost RC | |
Mar 6 03:10:58.351: INFO: namespace kubectl-8310 | |
Mar 6 03:10:58.351: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 create -f - --namespace=kubectl-8310' | |
Mar 6 03:11:03.547: INFO: stderr: "" | |
Mar 6 03:11:03.547: INFO: stdout: "replicationcontroller/agnhost-master created\n" | |
STEP: Waiting for Agnhost master to start. | |
Mar 6 03:11:04.550: INFO: Selector matched 1 pods for map[app:agnhost] | |
Mar 6 03:11:04.550: INFO: Found 0 / 1 | |
Mar 6 03:11:05.550: INFO: Selector matched 1 pods for map[app:agnhost] | |
Mar 6 03:11:05.550: INFO: Found 1 / 1 | |
Mar 6 03:11:05.550: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 | |
Mar 6 03:11:05.552: INFO: Selector matched 1 pods for map[app:agnhost] | |
Mar 6 03:11:05.552: INFO: ForEach: Found 1 pods from the filter. Now looping through them. | |
Mar 6 03:11:05.552: INFO: wait on agnhost-master startup in kubectl-8310 | |
Mar 6 03:11:05.552: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 logs agnhost-master-n6v7c agnhost-master --namespace=kubectl-8310' | |
Mar 6 03:11:15.639: INFO: stderr: "" | |
Mar 6 03:11:15.639: INFO: stdout: "Paused\n" | |
STEP: exposing RC | |
Mar 6 03:11:15.639: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-8310' | |
Mar 6 03:11:35.746: INFO: stderr: "" | |
Mar 6 03:11:35.746: INFO: stdout: "service/rm2 exposed\n" | |
Mar 6 03:11:35.749: INFO: Service rm2 in namespace kubectl-8310 found. | |
STEP: exposing service | |
Mar 6 03:11:37.753: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-8310' | |
Mar 6 03:11:57.860: INFO: stderr: "" | |
Mar 6 03:11:57.860: INFO: stdout: "service/rm3 exposed\n" | |
Mar 6 03:11:57.865: INFO: Service rm3 in namespace kubectl-8310 found. | |
[AfterEach] [sig-cli] Kubectl client | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 | |
Mar 6 03:11:59.870: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
STEP: Destroying namespace "kubectl-8310" for this suite. | |
• [SLOW TEST:61.658 seconds] | |
[sig-cli] Kubectl client | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 | |
Kubectl expose | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1295 | |
should create services for rc [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
------------------------------ | |
{"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":278,"completed":75,"skipped":1475,"failed":5,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]"]} | |
S | |
------------------------------ | |
[sig-storage] Projected downwardAPI | |
should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
[BeforeEach] [sig-storage] Projected downwardAPI | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 | |
STEP: Creating a kubernetes client | |
Mar 6 03:11:59.880: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759 | |
STEP: Building a namespace api object, basename projected | |
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-1827 | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[BeforeEach] [sig-storage] Projected downwardAPI | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 | |
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
STEP: Creating a pod to test downward API volume plugin | |
Mar 6 03:12:00.023: INFO: Waiting up to 5m0s for pod "downwardapi-volume-99ebdfa5-f72f-41f5-9f13-aa9850b36910" in namespace "projected-1827" to be "success or failure" | |
Mar 6 03:12:00.027: INFO: Pod "downwardapi-volume-99ebdfa5-f72f-41f5-9f13-aa9850b36910": Phase="Pending", Reason="", readiness=false. Elapsed: 4.322694ms | |
Mar 6 03:12:02.030: INFO: Pod "downwardapi-volume-99ebdfa5-f72f-41f5-9f13-aa9850b36910": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006909286s | |
STEP: Saw pod success | |
Mar 6 03:12:02.030: INFO: Pod "downwardapi-volume-99ebdfa5-f72f-41f5-9f13-aa9850b36910" satisfied condition "success or failure" | |
Mar 6 03:12:02.032: INFO: Trying to get logs from node worker02 pod downwardapi-volume-99ebdfa5-f72f-41f5-9f13-aa9850b36910 container client-container: <nil> | |
STEP: delete the pod | |
Mar 6 03:12:02.049: INFO: Waiting for pod downwardapi-volume-99ebdfa5-f72f-41f5-9f13-aa9850b36910 to disappear | |
Mar 6 03:12:02.051: INFO: Pod downwardapi-volume-99ebdfa5-f72f-41f5-9f13-aa9850b36910 no longer exists | |
[AfterEach] [sig-storage] Projected downwardAPI | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 | |
Mar 6 03:12:02.052: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
STEP: Destroying namespace "projected-1827" for this suite. | |
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":76,"skipped":1476,"failed":5,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]"]} | |
------------------------------ | |
[sig-node] Downward API | |
should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
[BeforeEach] [sig-node] Downward API | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 | |
STEP: Creating a kubernetes client | |
Mar 6 03:12:02.059: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759 | |
STEP: Building a namespace api object, basename downward-api | |
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-8731 | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
STEP: Creating a pod to test downward api env vars | |
Mar 6 03:12:02.197: INFO: Waiting up to 5m0s for pod "downward-api-eef76e16-cc20-4726-a72c-de1b7a21af43" in namespace "downward-api-8731" to be "success or failure" | |
Mar 6 03:12:02.200: INFO: Pod "downward-api-eef76e16-cc20-4726-a72c-de1b7a21af43": Phase="Pending", Reason="", readiness=false. Elapsed: 2.319602ms | |
Mar 6 03:12:04.203: INFO: Pod "downward-api-eef76e16-cc20-4726-a72c-de1b7a21af43": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.005819907s | |
STEP: Saw pod success | |
Mar 6 03:12:04.203: INFO: Pod "downward-api-eef76e16-cc20-4726-a72c-de1b7a21af43" satisfied condition "success or failure" | |
Mar 6 03:12:04.207: INFO: Trying to get logs from node worker02 pod downward-api-eef76e16-cc20-4726-a72c-de1b7a21af43 container dapi-container: <nil> | |
STEP: delete the pod | |
Mar 6 03:12:04.225: INFO: Waiting for pod downward-api-eef76e16-cc20-4726-a72c-de1b7a21af43 to disappear | |
Mar 6 03:12:04.227: INFO: Pod downward-api-eef76e16-cc20-4726-a72c-de1b7a21af43 no longer exists | |
[AfterEach] [sig-node] Downward API | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 | |
Mar 6 03:12:04.227: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
STEP: Destroying namespace "downward-api-8731" for this suite. | |
•{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":278,"completed":77,"skipped":1476,"failed":5,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]"]} | |
SSSSSSSSSSSSSSSS | |
------------------------------ | |
[sig-cli] Kubectl client Kubectl cluster-info | |
should check if Kubernetes master services is included in cluster-info [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
[BeforeEach] [sig-cli] Kubectl client | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 | |
STEP: Creating a kubernetes client | |
Mar 6 03:12:04.234: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759 | |
STEP: Building a namespace api object, basename kubectl | |
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-2028 | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[BeforeEach] [sig-cli] Kubectl client | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 | |
[It] should check if Kubernetes master services is included in cluster-info [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
STEP: validating cluster-info | |
Mar 6 03:12:04.364: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 cluster-info' | |
Mar 6 03:12:19.456: INFO: stderr: "" | |
Mar 6 03:12:19.456: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://10.96.0.1:443\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://10.96.0.1:443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\x1b[0;32mMetrics-server\x1b[0m is running at \x1b[0;33mhttps://10.96.0.1:443/api/v1/namespaces/kube-system/services/https:metrics-server:/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" | |
[AfterEach] [sig-cli] Kubectl client | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 | |
Mar 6 03:12:19.456: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
STEP: Destroying namespace "kubectl-2028" for this suite. | |
• [SLOW TEST:15.230 seconds] | |
[sig-cli] Kubectl client | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 | |
Kubectl cluster-info | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1128 | |
should check if Kubernetes master services is included in cluster-info [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
------------------------------ | |
{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]","total":278,"completed":78,"skipped":1492,"failed":5,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]"]} | |
SSSSSSSS | |
------------------------------ | |
[sig-apps] Deployment | |
RecreateDeployment should delete old pods and create new ones [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
[BeforeEach] [sig-apps] Deployment | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 | |
STEP: Creating a kubernetes client | |
Mar 6 03:12:19.465: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759 | |
STEP: Building a namespace api object, basename deployment | |
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in deployment-1237 | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[BeforeEach] [sig-apps] Deployment | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 | |
[It] RecreateDeployment should delete old pods and create new ones [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
Mar 6 03:12:19.597: INFO: Creating deployment "test-recreate-deployment" | |
Mar 6 03:12:19.599: INFO: Waiting for deployment "test-recreate-deployment" to be updated to revision 1 | |
Mar 6 03:12:19.609: INFO: deployment "test-recreate-deployment" doesn't have the required revision set | |
Mar 6 03:12:21.617: INFO: Waiting for deployment "test-recreate-deployment" to complete | |
Mar 6 03:12:21.620: INFO: Triggering a new rollout for deployment "test-recreate-deployment" | |
Mar 6 03:12:21.625: INFO: Updating deployment test-recreate-deployment | |
Mar 6 03:12:21.625: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods | |
[AfterEach] [sig-apps] Deployment | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 | |
Mar 6 03:12:21.682: INFO: Deployment "test-recreate-deployment": | |
&Deployment{ObjectMeta:{test-recreate-deployment deployment-1237 /apis/apps/v1/namespaces/deployment-1237/deployments/test-recreate-deployment 41d8c8b2-781d-489f-84f8-d3e22eb94ae3 12545 2 2020-03-06 03:12:19 +0000 UTC <nil> <nil> map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0031ba108 <nil> ClusterFirst map[] <nil> false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] []}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-03-06 03:12:21 +0000 UTC,LastTransitionTime:2020-03-06 03:12:21 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-5f94c574ff" is progressing.,LastUpdateTime:2020-03-06 03:12:21 +0000 UTC,LastTransitionTime:2020-03-06 03:12:19 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} | |
Mar 6 03:12:21.684: INFO: New ReplicaSet "test-recreate-deployment-5f94c574ff" of Deployment "test-recreate-deployment": | |
&ReplicaSet{ObjectMeta:{test-recreate-deployment-5f94c574ff deployment-1237 /apis/apps/v1/namespaces/deployment-1237/replicasets/test-recreate-deployment-5f94c574ff dd501a14-4caf-4276-b773-8495c7dd986e 12544 1 2020-03-06 03:12:21 +0000 UTC <nil> <nil> map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment 41d8c8b2-781d-489f-84f8-d3e22eb94ae3 0xc002f65bc7 0xc002f65bc8}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5f94c574ff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002f65c28 <nil> ClusterFirst map[] <nil> false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} | |
Mar 6 03:12:21.684: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": | |
Mar 6 03:12:21.684: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-799c574856 deployment-1237 /apis/apps/v1/namespaces/deployment-1237/replicasets/test-recreate-deployment-799c574856 63f28a5a-1b51-475c-bb3f-f06bb0f5239f 12534 2 2020-03-06 03:12:19 +0000 UTC <nil> <nil> map[name:sample-pod-3 pod-template-hash:799c574856] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment 41d8c8b2-781d-489f-84f8-d3e22eb94ae3 0xc002f65c97 0xc002f65c98}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 799c574856,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:sample-pod-3 pod-template-hash:799c574856] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002f65d08 <nil> ClusterFirst map[] <nil> false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} | |
Mar 6 03:12:21.690: INFO: Pod "test-recreate-deployment-5f94c574ff-thfw7" is not available: | |
&Pod{ObjectMeta:{test-recreate-deployment-5f94c574ff-thfw7 test-recreate-deployment-5f94c574ff- deployment-1237 /api/v1/namespaces/deployment-1237/pods/test-recreate-deployment-5f94c574ff-thfw7 4863d25f-2248-43d5-8978-9e0b2b98f006 12546 0 2020-03-06 03:12:21 +0000 UTC <nil> <nil> map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [{apps/v1 ReplicaSet test-recreate-deployment-5f94c574ff dd501a14-4caf-4276-b773-8495c7dd986e 0xc0031ba587 0xc0031ba588}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-dc9j4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-dc9j4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-dc9j4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:worker02,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-06 03:12:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-06 03:12:21 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-06 03:12:21 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-06 03:12:21 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:192.168.1.251,PodIP:,StartTime:2020-03-06 03:12:21 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} | |
[AfterEach] [sig-apps] Deployment | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 | |
Mar 6 03:12:21.690: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
STEP: Destroying namespace "deployment-1237" for this suite. | |
•{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":278,"completed":79,"skipped":1500,"failed":5,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]"]} | |
SSSSSSSSSSSS | |
------------------------------ | |
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] | |
should be able to deny custom resource creation, update and deletion [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 | |
STEP: Creating a kubernetes client | |
Mar 6 03:12:21.698: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759 | |
STEP: Building a namespace api object, basename webhook | |
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in webhook-1905 | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 | |
STEP: Setting up server cert | |
STEP: Create role binding to let webhook read extension-apiserver-authentication | |
STEP: Deploying the webhook pod | |
STEP: Wait for the deployment to be ready | |
Mar 6 03:12:22.798: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set | |
STEP: Deploying the webhook service | |
STEP: Verifying the service has paired with the endpoint | |
Mar 6 03:12:25.821: INFO: Waiting for the number of service:e2e-test-webhook endpoints to be 1 | |
[It] should be able to deny custom resource creation, update and deletion [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
Mar 6 03:12:25.824: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759 | |
STEP: Registering the custom resource webhook via the AdmissionRegistration API | |
Mar 6 03:12:30.879: INFO: Waiting for webhook configuration to be ready... | |
Mar 6 03:12:40.989: INFO: Waiting for webhook configuration to be ready... | |
Mar 6 03:12:51.088: INFO: Waiting for webhook configuration to be ready... | |
Mar 6 03:13:01.191: INFO: Waiting for webhook configuration to be ready... | |
Mar 6 03:13:11.203: INFO: Waiting for webhook configuration to be ready... | |
Mar 6 03:13:11.203: FAIL: waiting for webhook configuration to be ready | |
Unexpected error: | |
<*errors.errorString | 0xc0000b3950>: { | |
s: "timed out waiting for the condition", | |
} | |
timed out waiting for the condition | |
occurred | |
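
The webhook pod itself became Ready at 03:12:24 (see the pod conditions collected below), so the timeout sits in the webhook registration becoming effective rather than in the backing workload. Diagnostic probes that would narrow this down, assuming kubectl access to the same cluster (resource names taken from this log):

$ kubectl get validatingwebhookconfigurations                     # was the configuration actually registered?
$ kubectl -n webhook-1905 get endpoints e2e-test-webhook          # does the service still have its one endpoint?
$ kubectl -n webhook-1905 logs deploy/sample-webhook-deployment   # is the webhook serving admission requests?
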
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 | |
STEP: Collecting events from namespace "webhook-1905". | |
STEP: Found 6 events. | |
Mar 6 03:13:11.715: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for sample-webhook-deployment-5f65f8c764-2skpg: {default-scheduler } Scheduled: Successfully assigned webhook-1905/sample-webhook-deployment-5f65f8c764-2skpg to worker02 | |
Mar 6 03:13:11.715: INFO: At 2020-03-06 03:12:22 +0000 UTC - event for sample-webhook-deployment: {deployment-controller } ScalingReplicaSet: Scaled up replica set sample-webhook-deployment-5f65f8c764 to 1 | |
Mar 6 03:13:11.715: INFO: At 2020-03-06 03:12:22 +0000 UTC - event for sample-webhook-deployment-5f65f8c764: {replicaset-controller } SuccessfulCreate: Created pod: sample-webhook-deployment-5f65f8c764-2skpg | |
Mar 6 03:13:11.715: INFO: At 2020-03-06 03:12:23 +0000 UTC - event for sample-webhook-deployment-5f65f8c764-2skpg: {kubelet worker02} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine | |
Mar 6 03:13:11.715: INFO: At 2020-03-06 03:12:23 +0000 UTC - event for sample-webhook-deployment-5f65f8c764-2skpg: {kubelet worker02} Created: Created container sample-webhook | |
Mar 6 03:13:11.715: INFO: At 2020-03-06 03:12:23 +0000 UTC - event for sample-webhook-deployment-5f65f8c764-2skpg: {kubelet worker02} Started: Started container sample-webhook | |
Mar 6 03:13:11.720: INFO: POD NODE PHASE GRACE CONDITIONS | |
Mar 6 03:13:11.720: INFO: sample-webhook-deployment-5f65f8c764-2skpg worker02 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-06 03:12:22 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-06 03:12:24 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-06 03:12:24 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-06 03:12:22 +0000 UTC }] | |
Mar 6 03:13:11.720: INFO: | |
Mar 6 03:13:11.723: INFO: | |
Logging node info for node master01 | |
Mar 6 03:13:11.725: INFO: Node Info: &Node{ObjectMeta:{master01 /api/v1/nodes/master01 aeae8a5b-4e17-4702-bb02-bcfde6cdb12a 11359 0 2020-03-06 02:29:18 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master01 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"76:15:82:0d:8b:ab"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:192.168.1.247 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{42140479488 0} {<nil>} 41152812Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3823214592 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{37926431477 0} {<nil>} 37926431477 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3718356992 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-03-06 03:08:59 +0000 UTC,LastTransitionTime:2020-03-06 02:29:18 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-03-06 03:08:59 +0000 UTC,LastTransitionTime:2020-03-06 02:29:18 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-03-06 03:08:59 +0000 UTC,LastTransitionTime:2020-03-06 02:29:18 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-03-06 03:08:59 +0000 UTC,LastTransitionTime:2020-03-06 02:30:33 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:192.168.1.247,},NodeAddress{Type:Hostname,Address:master01,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:20200220105402131453637367482142,SystemUUID:195205FE-EE72-4794-8EAA-AC554EFDEC9B,BootID:6a3bf627-7476-4f52-84fa-f3eab6d26427,KernelVersion:3.10.0-1062.12.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.5,KubeletVersion:v1.17.3,KubeProxyVersion:v1.17.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[192.168.1.252/library/k8s-keepalived@sha256:3db0032ef2feef675710595681cf9463470af179cd324c6773e831b6649ef785 192.168.1.252/library/k8s-keepalived:1.3.5],SizeBytes:356553439,},ContainerImage{Names:[sonobuoy/systemd-logs@sha256:fadad24a66ddd544987c38811108a73d1a306dd3b5e3f090b207786f2825ffde sonobuoy/systemd-logs:v0.3],SizeBytes:297365055,},ContainerImage{Names:[192.168.1.252/library/etcd-amd64@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216 
192.168.1.252/library/etcd-amd64:3.4.3-0],SizeBytes:288426917,},ContainerImage{Names:[192.168.1.252/library/node@sha256:3226b047a7034918a05c986347c5fb4d2cce6d0844f325851bfba586271ee617 192.168.1.252/library/node:v3.12.0],SizeBytes:257501722,},ContainerImage{Names:[192.168.1.252/library/cni@sha256:dc3bc525f1d3b794db1f2a7ceb7d8b84699d13e1431fbc117063f7e2075ff4b5 192.168.1.252/library/cni:v3.12.0],SizeBytes:206678344,},ContainerImage{Names:[192.168.1.252/library/kube-apiserver@sha256:4ee4113bcce32ae436b15364b09a2439dee15f6a19bbcf43a470d3dc879b0c62 192.168.1.252/library/kube-apiserver:v1.17.3],SizeBytes:170986003,},ContainerImage{Names:[192.168.1.252/library/envoy@sha256:b36ee021fc4d285de7861dbaee01e7437ce1d63814ead6ae3e4dfcad4a951b2e 192.168.1.252/library/envoy:v1.12.2],SizeBytes:170487454,},ContainerImage{Names:[192.168.1.252/library/kube-controller-manager@sha256:8f03391c0d22e3da8d22725178efe3c4338e1920504f7f4eb4a1c7f5f40c4c6e 192.168.1.252/library/kube-controller-manager:v1.17.3],SizeBytes:160918035,},ContainerImage{Names:[192.168.1.252/library/dashboard@sha256:4e0d39dae7e089b77fe2bbcef648f89905716db9c1f0884950bfd42d9f446c29 192.168.1.252/library/dashboard:v2.0.0-rc5],SizeBytes:126359420,},ContainerImage{Names:[192.168.1.252/library/kube-proxy@sha256:8dfd672298f8fbcd37731e0f2e45be87199cee6ee1f482ffec766693dcec0ec4 192.168.1.252/library/kube-proxy:v1.17.3],SizeBytes:115964919,},ContainerImage{Names:[192.168.1.252/library/pod2daemon-flexvol@sha256:2bf967507ad1adb749f3484b5d39e7d7b8700c4a0f836e8093dae5c57a585ccf 192.168.1.252/library/pod2daemon-flexvol:v3.12.0],SizeBytes:111122324,},ContainerImage{Names:[192.168.1.252/library/kube-scheduler@sha256:eea535b58da5b2fb66b32c61c6913d014cf85061a39527ad9bca2fa84b53dc1c 192.168.1.252/library/kube-scheduler:v1.17.3],SizeBytes:94435859,},ContainerImage{Names:[sonobuoy/sonobuoy@sha256:73f9cfe546ac6d0d5b94308293484c29c3b02fc341a975cfca97c80dd8728ed7 sonobuoy/sonobuoy:v0.17.2],SizeBytes:84339798,},ContainerImage{Names:[192.168.1.252/library/k8s-haproxy@sha256:f88bca67a2782e7bd88af168d5c11216c32104d12fe9240fac54a1d3196e3f9c 192.168.1.252/library/k8s-haproxy:2.0.0],SizeBytes:73550856,},ContainerImage{Names:[192.168.1.252/library/kube-controllers@sha256:edf14a5bcc663d2b0013b1830469626b7aa27206cbc7715ed83c042890ca5837 192.168.1.252/library/kube-controllers:v3.12.0],SizeBytes:56567983,},ContainerImage{Names:[192.168.1.252/library/typha@sha256:3baf9aef445a3224160748d6f560426eab798d6c65620020b2466e114bf6805f 192.168.1.252/library/typha:v3.12.0],SizeBytes:56034822,},ContainerImage{Names:[192.168.1.252/library/flannel@sha256:bd76b84c74ad70368a2341c2402841b75950df881388e43fc2aca000c546653a 192.168.1.252/library/flannel:v0.11.0-amd64],SizeBytes:52567296,},ContainerImage{Names:[192.168.1.252/library/ctl@sha256:128e4c95cf92a482496d591c43cad2a6a21fab1f0e8a8f13e8503f1324106dc8 192.168.1.252/library/ctl:v3.12.0],SizeBytes:47895826,},ContainerImage{Names:[192.168.1.252/library/coredns@sha256:608ac7ccba5ce41c6941fca13bc67059c1eef927fd968b554b790e21cc92543c 192.168.1.252/library/coredns:1.6.5],SizeBytes:41578211,},ContainerImage{Names:[192.168.1.252/library/metrics-scraper@sha256:e24a74b3b1cdc84d6285d507a12eb06907fd8c457b3e8ae9baa9418eca43efc4 192.168.1.252/library/metrics-scraper:v1.0.3],SizeBytes:40105664,},ContainerImage{Names:[192.168.1.252/library/metrics-server-amd64@sha256:c9c4e95068b51d6b33a9dccc61875df07dc650abbf4ac1a19d58b4628f89288b 
192.168.1.252/library/metrics-server-amd64:v0.3.6],SizeBytes:39944451,},ContainerImage{Names:[192.168.1.252/library/contour@sha256:3e10c69dfeaa830b84a50e6b47ce90e0f5a1aa84daf77f7662313077fa9579cf 192.168.1.252/library/contour:v1.1.0],SizeBytes:35721216,},ContainerImage{Names:[192.168.1.252/library/kuard-amd64@sha256:bd17153e9a3319f401acc7a27759243f37d422c06cbbf01cb3e1f54bbbfe14f4 192.168.1.252/library/kuard-amd64:1],SizeBytes:19745911,},ContainerImage{Names:[192.168.1.252/library/pause@sha256:fcaff905397ba63fd376d0c3019f1f1cb6e7506131389edbcb3d22719f1ae54d 192.168.1.252/library/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} | |
Mar 6 03:13:11.725: INFO: | |
Logging kubelet events for node master01 | |
Mar 6 03:13:11.729: INFO: | |
Logging pods the kubelet thinks are on node master01 | |
Mar 6 03:13:11.739: INFO: sonobuoy-systemd-logs-daemon-set-2e5dbce20e154397-drhpn started at 2020-03-06 02:38:12 +0000 UTC (0+2 container statuses recorded) | |
Mar 6 03:13:11.739: INFO: Container sonobuoy-worker ready: true, restart count 0 | |
Mar 6 03:13:11.739: INFO: Container systemd-logs ready: true, restart count 0 | |
Mar 6 03:13:11.739: INFO: kube-flannel-ds-amd64-6mbnb started at 2020-03-06 02:30:22 +0000 UTC (1+1 container statuses recorded) | |
Mar 6 03:13:11.739: INFO: Init container install-cni ready: true, restart count 0 | |
Mar 6 03:13:11.739: INFO: Container kube-flannel ready: true, restart count 0 | |
Mar 6 03:13:11.739: INFO: kube-proxy-4j8ft started at 2020-03-06 02:30:22 +0000 UTC (0+1 container statuses recorded) | |
Mar 6 03:13:11.739: INFO: Container kube-proxy ready: true, restart count 0 | |
Mar 6 03:13:11.739: INFO: kube-apiserver-master01 started at 2020-03-06 02:30:22 +0000 UTC (0+1 container statuses recorded) | |
Mar 6 03:13:11.739: INFO: Container kube-apiserver ready: true, restart count 0 | |
Mar 6 03:13:11.739: INFO: kube-controller-manager-master01 started at 2020-03-06 02:30:22 +0000 UTC (0+1 container statuses recorded) | |
Mar 6 03:13:11.739: INFO: Container kube-controller-manager ready: true, restart count 1 | |
Mar 6 03:13:11.739: INFO: kube-scheduler-master01 started at 2020-03-06 02:30:22 +0000 UTC (0+1 container statuses recorded) | |
Mar 6 03:13:11.739: INFO: Container kube-scheduler ready: true, restart count 1 | |
W0306 03:13:11.742353 19 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. | |
Mar 6 03:13:11.760: INFO: | |
Latency metrics for node master01 | |
Mar 6 03:13:11.760: INFO: | |
Logging node info for node master02 | |
Mar 6 03:13:11.762: INFO: Node Info: &Node{ObjectMeta:{master02 /api/v1/nodes/master02 6a0ecb6f-ef31-4754-858b-3eba76999224 11338 0 2020-03-06 02:29:17 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master02 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"82:c1:38:99:3b:39"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:192.168.1.248 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{42140479488 0} {<nil>} 41152812Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3823214592 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{37926431477 0} {<nil>} 37926431477 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3718356992 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-03-06 03:08:56 +0000 UTC,LastTransitionTime:2020-03-06 02:29:16 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-03-06 03:08:56 +0000 UTC,LastTransitionTime:2020-03-06 02:29:16 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-03-06 03:08:56 +0000 UTC,LastTransitionTime:2020-03-06 02:29:16 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-03-06 03:08:56 +0000 UTC,LastTransitionTime:2020-03-06 02:30:10 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:192.168.1.248,},NodeAddress{Type:Hostname,Address:master02,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:20200220105402131453637367482142,SystemUUID:8B2C4639-6D22-4D0D-A03C-F6D7E328F9D5,BootID:efd7329f-ae31-4806-ba13-7fdd5fad57df,KernelVersion:3.10.0-1062.12.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.5,KubeletVersion:v1.17.3,KubeProxyVersion:v1.17.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[sonobuoy/systemd-logs@sha256:fadad24a66ddd544987c38811108a73d1a306dd3b5e3f090b207786f2825ffde sonobuoy/systemd-logs:v0.3],SizeBytes:297365055,},ContainerImage{Names:[192.168.1.252/library/etcd-amd64@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216 192.168.1.252/library/etcd-amd64:3.4.3-0],SizeBytes:288426917,},ContainerImage{Names:[192.168.1.252/library/kube-apiserver@sha256:4ee4113bcce32ae436b15364b09a2439dee15f6a19bbcf43a470d3dc879b0c62 
192.168.1.252/library/kube-apiserver:v1.17.3],SizeBytes:170986003,},ContainerImage{Names:[192.168.1.252/library/kube-controller-manager@sha256:8f03391c0d22e3da8d22725178efe3c4338e1920504f7f4eb4a1c7f5f40c4c6e 192.168.1.252/library/kube-controller-manager:v1.17.3],SizeBytes:160918035,},ContainerImage{Names:[192.168.1.252/library/kube-proxy@sha256:8dfd672298f8fbcd37731e0f2e45be87199cee6ee1f482ffec766693dcec0ec4 192.168.1.252/library/kube-proxy:v1.17.3],SizeBytes:115964919,},ContainerImage{Names:[192.168.1.252/library/kube-scheduler@sha256:eea535b58da5b2fb66b32c61c6913d014cf85061a39527ad9bca2fa84b53dc1c 192.168.1.252/library/kube-scheduler:v1.17.3],SizeBytes:94435859,},ContainerImage{Names:[sonobuoy/sonobuoy@sha256:73f9cfe546ac6d0d5b94308293484c29c3b02fc341a975cfca97c80dd8728ed7 sonobuoy/sonobuoy:v0.17.2],SizeBytes:84339798,},ContainerImage{Names:[192.168.1.252/library/k8s-haproxy@sha256:f88bca67a2782e7bd88af168d5c11216c32104d12fe9240fac54a1d3196e3f9c 192.168.1.252/library/k8s-haproxy:2.0.0],SizeBytes:73550856,},ContainerImage{Names:[192.168.1.252/library/flannel@sha256:bd76b84c74ad70368a2341c2402841b75950df881388e43fc2aca000c546653a 192.168.1.252/library/flannel:v0.11.0-amd64],SizeBytes:52567296,},ContainerImage{Names:[192.168.1.252/library/coredns@sha256:608ac7ccba5ce41c6941fca13bc67059c1eef927fd968b554b790e21cc92543c 192.168.1.252/library/coredns:1.6.5],SizeBytes:41578211,},ContainerImage{Names:[192.168.1.252/library/pause@sha256:fcaff905397ba63fd376d0c3019f1f1cb6e7506131389edbcb3d22719f1ae54d 192.168.1.252/library/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} | |
Mar 6 03:13:11.762: INFO: | |
Logging kubelet events for node master02 | |
Mar 6 03:13:11.766: INFO: | |
Logging pods the kubelet thinks are on node master02 | |
Mar 6 03:13:11.776: INFO: kube-apiserver-master02 started at 2020-03-06 02:29:50 +0000 UTC (0+1 container statuses recorded) | |
Mar 6 03:13:11.776: INFO: Container kube-apiserver ready: true, restart count 0 | |
Mar 6 03:13:11.776: INFO: kube-controller-manager-master02 started at 2020-03-06 02:29:50 +0000 UTC (0+1 container statuses recorded) | |
Mar 6 03:13:11.776: INFO: Container kube-controller-manager ready: true, restart count 1 | |
Mar 6 03:13:11.776: INFO: kube-scheduler-master02 started at 2020-03-06 02:29:23 +0000 UTC (0+1 container statuses recorded) | |
Mar 6 03:13:11.776: INFO: Container kube-scheduler ready: true, restart count 1 | |
Mar 6 03:13:11.776: INFO: kube-proxy-scdss started at 2020-03-06 02:30:00 +0000 UTC (0+1 container statuses recorded) | |
Mar 6 03:13:11.776: INFO: Container kube-proxy ready: true, restart count 0 | |
Mar 6 03:13:11.776: INFO: kube-flannel-ds-amd64-vfl78 started at 2020-03-06 02:30:00 +0000 UTC (1+1 container statuses recorded) | |
Mar 6 03:13:11.776: INFO: Init container install-cni ready: true, restart count 0 | |
Mar 6 03:13:11.776: INFO: Container kube-flannel ready: true, restart count 0 | |
Mar 6 03:13:11.776: INFO: coredns-7795996659-phdkc started at 2020-03-06 02:30:13 +0000 UTC (0+1 container statuses recorded) | |
Mar 6 03:13:11.776: INFO: Container coredns ready: true, restart count 0 | |
Mar 6 03:13:11.776: INFO: sonobuoy-systemd-logs-daemon-set-2e5dbce20e154397-2zmwm started at 2020-03-06 02:38:12 +0000 UTC (0+2 container statuses recorded) | |
Mar 6 03:13:11.776: INFO: Container sonobuoy-worker ready: true, restart count 0 | |
Mar 6 03:13:11.776: INFO: Container systemd-logs ready: true, restart count 0 | |
W0306 03:13:11.779136 19 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. | |
Mar 6 03:13:11.811: INFO: | |
Latency metrics for node master02 | |
Mar 6 03:13:11.811: INFO: | |
Logging node info for node master03 | |
Mar 6 03:13:11.815: INFO: Node Info: &Node{ObjectMeta:{master03 /api/v1/nodes/master03 c508ee4c-fe9d-4c73-a857-e57fba26fa86 11340 0 2020-03-06 02:29:17 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master03 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"4a:aa:08:ea:16:90"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:192.168.1.249 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{42140479488 0} {<nil>} 41152812Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3823226880 0} {<nil>} 3733620Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{37926431477 0} {<nil>} 37926431477 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3718369280 0} {<nil>} 3631220Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-03-06 03:08:56 +0000 UTC,LastTransitionTime:2020-03-06 02:29:17 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-03-06 03:08:56 +0000 UTC,LastTransitionTime:2020-03-06 02:29:17 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-03-06 03:08:56 +0000 UTC,LastTransitionTime:2020-03-06 02:29:17 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-03-06 03:08:56 +0000 UTC,LastTransitionTime:2020-03-06 02:30:10 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:192.168.1.249,},NodeAddress{Type:Hostname,Address:master03,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:20200220105402131453637367482142,SystemUUID:0C8F0A67-EB7E-42DE-9418-0973FE735A08,BootID:05b1fa23-e6be-4032-bc93-8800264dff91,KernelVersion:3.10.0-1062.12.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.5,KubeletVersion:v1.17.3,KubeProxyVersion:v1.17.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[sonobuoy/systemd-logs@sha256:fadad24a66ddd544987c38811108a73d1a306dd3b5e3f090b207786f2825ffde sonobuoy/systemd-logs:v0.3],SizeBytes:297365055,},ContainerImage{Names:[192.168.1.252/library/etcd-amd64@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216 192.168.1.252/library/etcd-amd64:3.4.3-0],SizeBytes:288426917,},ContainerImage{Names:[192.168.1.252/library/kube-apiserver@sha256:4ee4113bcce32ae436b15364b09a2439dee15f6a19bbcf43a470d3dc879b0c62 
192.168.1.252/library/kube-apiserver:v1.17.3],SizeBytes:170986003,},ContainerImage{Names:[192.168.1.252/library/kube-controller-manager@sha256:8f03391c0d22e3da8d22725178efe3c4338e1920504f7f4eb4a1c7f5f40c4c6e 192.168.1.252/library/kube-controller-manager:v1.17.3],SizeBytes:160918035,},ContainerImage{Names:[192.168.1.252/library/dashboard@sha256:4e0d39dae7e089b77fe2bbcef648f89905716db9c1f0884950bfd42d9f446c29 192.168.1.252/library/dashboard:v2.0.0-rc5],SizeBytes:126359420,},ContainerImage{Names:[192.168.1.252/library/kube-proxy@sha256:8dfd672298f8fbcd37731e0f2e45be87199cee6ee1f482ffec766693dcec0ec4 192.168.1.252/library/kube-proxy:v1.17.3],SizeBytes:115964919,},ContainerImage{Names:[192.168.1.252/library/kube-scheduler@sha256:eea535b58da5b2fb66b32c61c6913d014cf85061a39527ad9bca2fa84b53dc1c 192.168.1.252/library/kube-scheduler:v1.17.3],SizeBytes:94435859,},ContainerImage{Names:[sonobuoy/sonobuoy@sha256:73f9cfe546ac6d0d5b94308293484c29c3b02fc341a975cfca97c80dd8728ed7 sonobuoy/sonobuoy:v0.17.2],SizeBytes:84339798,},ContainerImage{Names:[192.168.1.252/library/k8s-haproxy@sha256:f88bca67a2782e7bd88af168d5c11216c32104d12fe9240fac54a1d3196e3f9c 192.168.1.252/library/k8s-haproxy:2.0.0],SizeBytes:73550856,},ContainerImage{Names:[192.168.1.252/library/flannel@sha256:bd76b84c74ad70368a2341c2402841b75950df881388e43fc2aca000c546653a 192.168.1.252/library/flannel:v0.11.0-amd64],SizeBytes:52567296,},ContainerImage{Names:[192.168.1.252/library/coredns@sha256:608ac7ccba5ce41c6941fca13bc67059c1eef927fd968b554b790e21cc92543c 192.168.1.252/library/coredns:1.6.5],SizeBytes:41578211,},ContainerImage{Names:[192.168.1.252/library/metrics-scraper@sha256:e24a74b3b1cdc84d6285d507a12eb06907fd8c457b3e8ae9baa9418eca43efc4 192.168.1.252/library/metrics-scraper:v1.0.3],SizeBytes:40105664,},ContainerImage{Names:[192.168.1.252/library/pause@sha256:fcaff905397ba63fd376d0c3019f1f1cb6e7506131389edbcb3d22719f1ae54d 192.168.1.252/library/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} | |
Mar 6 03:13:11.815: INFO: | |
Logging kubelet events for node master03 | |
Mar 6 03:13:11.820: INFO: | |
Logging pods the kubelet thinks are on node master03 | |
Mar 6 03:13:11.833: INFO: kube-apiserver-master03 started at 2020-03-06 02:29:24 +0000 UTC (0+1 container statuses recorded) | |
Mar 6 03:13:11.833: INFO: Container kube-apiserver ready: true, restart count 0 | |
Mar 6 03:13:11.833: INFO: kube-scheduler-master03 started at 2020-03-06 02:29:38 +0000 UTC (0+1 container statuses recorded) | |
Mar 6 03:13:11.833: INFO: Container kube-scheduler ready: true, restart count 1 | |
Mar 6 03:13:11.833: INFO: kube-proxy-stbnn started at 2020-03-06 02:30:00 +0000 UTC (0+1 container statuses recorded) | |
Mar 6 03:13:11.833: INFO: Container kube-proxy ready: true, restart count 0 | |
Mar 6 03:13:11.833: INFO: kubernetes-dashboard-6647798d59-j2ms4 started at 2020-03-06 02:30:10 +0000 UTC (0+1 container statuses recorded) | |
Mar 6 03:13:11.833: INFO: Container kubernetes-dashboard ready: true, restart count 0 | |
Mar 6 03:13:11.833: INFO: coredns-7795996659-cmq4d started at 2020-03-06 02:30:13 +0000 UTC (0+1 container statuses recorded) | |
Mar 6 03:13:11.833: INFO: Container coredns ready: true, restart count 0 | |
Mar 6 03:13:11.833: INFO: sonobuoy-systemd-logs-daemon-set-2e5dbce20e154397-w5psq started at 2020-03-06 02:38:12 +0000 UTC (0+2 container statuses recorded) | |
Mar 6 03:13:11.833: INFO: Container sonobuoy-worker ready: true, restart count 0 | |
Mar 6 03:13:11.833: INFO: Container systemd-logs ready: true, restart count 0 | |
Mar 6 03:13:11.833: INFO: kube-controller-manager-master03 started at 2020-03-06 02:29:50 +0000 UTC (0+1 container statuses recorded) | |
Mar 6 03:13:11.833: INFO: Container kube-controller-manager ready: true, restart count 1 | |
Mar 6 03:13:11.833: INFO: kube-flannel-ds-amd64-hs69k started at 2020-03-06 02:30:00 +0000 UTC (1+1 container statuses recorded) | |
Mar 6 03:13:11.833: INFO: Init container install-cni ready: true, restart count 0 | |
Mar 6 03:13:11.833: INFO: Container kube-flannel ready: true, restart count 0 | |
Mar 6 03:13:11.833: INFO: dashboard-metrics-scraper-56568cb9d7-d57kl started at 2020-03-06 02:30:10 +0000 UTC (0+1 container statuses recorded) | |
Mar 6 03:13:11.833: INFO: Container dashboard-metrics-scraper ready: true, restart count 0 | |
W0306 03:13:11.835874 19 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. | |
Mar 6 03:13:11.856: INFO: | |
Latency metrics for node master03 | |
Mar 6 03:13:11.856: INFO: | |
Logging node info for node worker01 | |
Mar 6 03:13:11.858: INFO: Node Info: &Node{ObjectMeta:{worker01 /api/v1/nodes/worker01 cf4203bb-1bfa-4b35-991f-935275b6bc46 12224 0 2020-03-06 02:30:30 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:worker01 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"5a:49:f5:5b:74:b3"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:192.168.1.250 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{42140479488 0} {<nil>} 41152812Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3823214592 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{37926431477 0} {<nil>} 37926431477 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3718356992 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-03-06 03:11:21 +0000 UTC,LastTransitionTime:2020-03-06 02:30:30 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-03-06 03:11:21 +0000 UTC,LastTransitionTime:2020-03-06 02:30:30 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-03-06 03:11:21 +0000 UTC,LastTransitionTime:2020-03-06 02:30:30 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-03-06 03:11:21 +0000 UTC,LastTransitionTime:2020-03-06 02:30:45 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:192.168.1.250,},NodeAddress{Type:Hostname,Address:worker01,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:20200220105402131453637367482142,SystemUUID:5CA364AA-0FF8-4B57-BA86-F28699575F0D,BootID:c85ad0c4-ebcf-4d01-97f0-a36c1cfc50be,KernelVersion:3.10.0-1062.12.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.5,KubeletVersion:v1.17.3,KubeProxyVersion:v1.17.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[sonobuoy/systemd-logs@sha256:fadad24a66ddd544987c38811108a73d1a306dd3b5e3f090b207786f2825ffde sonobuoy/systemd-logs:v0.3],SizeBytes:297365055,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[192.168.1.252/library/kube-proxy@sha256:8dfd672298f8fbcd37731e0f2e45be87199cee6ee1f482ffec766693dcec0ec4 
192.168.1.252/library/kube-proxy:v1.17.3],SizeBytes:115964919,},ContainerImage{Names:[sonobuoy/sonobuoy@sha256:73f9cfe546ac6d0d5b94308293484c29c3b02fc341a975cfca97c80dd8728ed7 sonobuoy/sonobuoy:v0.17.2],SizeBytes:84339798,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5 gcr.io/kubernetes-e2e-test-images/agnhost:2.8],SizeBytes:52800335,},ContainerImage{Names:[192.168.1.252/library/flannel@sha256:bd76b84c74ad70368a2341c2402841b75950df881388e43fc2aca000c546653a 192.168.1.252/library/flannel:v0.11.0-amd64],SizeBytes:52567296,},ContainerImage{Names:[192.168.1.252/library/metrics-server-amd64@sha256:c9c4e95068b51d6b33a9dccc61875df07dc650abbf4ac1a19d58b4628f89288b 192.168.1.252/library/metrics-server-amd64:v0.3.6],SizeBytes:39944451,},ContainerImage{Names:[192.168.1.252/library/kuard-amd64@sha256:bd17153e9a3319f401acc7a27759243f37d422c06cbbf01cb3e1f54bbbfe14f4 192.168.1.252/library/kuard-amd64:1],SizeBytes:19745911,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[192.168.1.252/library/pause@sha256:fcaff905397ba63fd376d0c3019f1f1cb6e7506131389edbcb3d22719f1ae54d 192.168.1.252/library/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} | |
Mar 6 03:13:11.858: INFO: | |
Logging kubelet events for node worker01 | |
Mar 6 03:13:11.864: INFO: | |
Logging pods the kubelet thinks are on node worker01 | |
Mar 6 03:13:11.873: INFO: kube-proxy-kcb8f started at 2020-03-06 02:30:30 +0000 UTC (0+1 container statuses recorded) | |
Mar 6 03:13:11.873: INFO: Container kube-proxy ready: true, restart count 0 | |
Mar 6 03:13:11.873: INFO: contour-certgen-82k46 started at 2020-03-06 02:30:46 +0000 UTC (0+1 container statuses recorded) | |
Mar 6 03:13:11.873: INFO: Container contour ready: false, restart count 0 | |
Mar 6 03:13:11.873: INFO: contour-54748c65f5-gk5sz started at 2020-03-06 02:30:51 +0000 UTC (0+1 container statuses recorded) | |
Mar 6 03:13:11.873: INFO: Container contour ready: false, restart count 0 | |
Mar 6 03:13:11.873: INFO: sonobuoy-systemd-logs-daemon-set-2e5dbce20e154397-2bz8g started at 2020-03-06 02:38:12 +0000 UTC (0+2 container statuses recorded) | |
Mar 6 03:13:11.873: INFO: Container sonobuoy-worker ready: true, restart count 0 | |
Mar 6 03:13:11.873: INFO: Container systemd-logs ready: true, restart count 0 | |
Mar 6 03:13:11.873: INFO: kube-flannel-ds-amd64-xxhz9 started at 2020-03-06 02:30:30 +0000 UTC (1+1 container statuses recorded) | |
Mar 6 03:13:11.873: INFO: Init container install-cni ready: true, restart count 0 | |
Mar 6 03:13:11.873: INFO: Container kube-flannel ready: true, restart count 1 | |
Mar 6 03:13:11.873: INFO: kuard-678c676f5d-vsn86 started at 2020-03-06 02:30:51 +0000 UTC (0+1 container statuses recorded) | |
Mar 6 03:13:11.873: INFO: Container kuard ready: true, restart count 0 | |
Mar 6 03:13:11.873: INFO: envoy-lvmcb started at 2020-03-06 02:30:45 +0000 UTC (1+1 container statuses recorded) | |
Mar 6 03:13:11.873: INFO: Init container envoy-initconfig ready: false, restart count 0 | |
Mar 6 03:13:11.873: INFO: Container envoy ready: false, restart count 0 | |
Mar 6 03:13:11.873: INFO: kuard-678c676f5d-m29b6 started at 2020-03-06 02:30:49 +0000 UTC (0+1 container statuses recorded) | |
Mar 6 03:13:11.873: INFO: Container kuard ready: true, restart count 0 | |
Mar 6 03:13:11.873: INFO: kuard-678c676f5d-tzsnn started at 2020-03-06 02:30:51 +0000 UTC (0+1 container statuses recorded) | |
Mar 6 03:13:11.873: INFO: Container kuard ready: true, restart count 0 | |
Mar 6 03:13:11.873: INFO: contour-54748c65f5-jl5wz started at 2020-03-06 02:30:46 +0000 UTC (0+1 container statuses recorded) | |
Mar 6 03:13:11.873: INFO: Container contour ready: false, restart count 0 | |
Mar 6 03:13:11.873: INFO: metrics-server-78799bf646-xrsnn started at 2020-03-06 02:30:51 +0000 UTC (0+1 container statuses recorded) | |
Mar 6 03:13:11.873: INFO: Container metrics-server ready: true, restart count 0 | |
W0306 03:13:11.876614 19 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. | |
Mar 6 03:13:11.893: INFO: | |
Latency metrics for node worker01 | |
Mar 6 03:13:11.893: INFO: | |
Logging node info for node worker02 | |
Mar 6 03:13:11.895: INFO: Node Info: &Node{ObjectMeta:{worker02 /api/v1/nodes/worker02 f0994ba1-7e4e-4cc8-b3c8-25d34b25d9ce 11322 0 2020-03-06 02:30:30 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:worker02 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"1a:75:0a:e8:cc:76"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:192.168.1.251 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{42140479488 0} {<nil>} 41152812Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3823214592 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{37926431477 0} {<nil>} 37926431477 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3718356992 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-03-06 03:08:50 +0000 UTC,LastTransitionTime:2020-03-06 02:30:30 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-03-06 03:08:50 +0000 UTC,LastTransitionTime:2020-03-06 02:30:30 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-03-06 03:08:50 +0000 UTC,LastTransitionTime:2020-03-06 02:30:30 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-03-06 03:08:50 +0000 UTC,LastTransitionTime:2020-03-06 02:30:55 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:192.168.1.251,},NodeAddress{Type:Hostname,Address:worker02,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:20200220105402131453637367482142,SystemUUID:EDBF7E33-228B-4233-93CF-7850B5A311E4,BootID:bd6a4f0f-5ddb-4585-83df-253b9292b617,KernelVersion:3.10.0-1062.12.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.5,KubeletVersion:v1.17.3,KubeProxyVersion:v1.17.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[gcr.io/google-containers/conformance@sha256:502434491cbc3fac5d9a606a879e554cf881b2ba5b688bed25f2c33d3ff1c777 gcr.io/google-containers/conformance:v1.17.3],SizeBytes:575831882,},ContainerImage{Names:[sonobuoy/systemd-logs@sha256:fadad24a66ddd544987c38811108a73d1a306dd3b5e3f090b207786f2825ffde sonobuoy/systemd-logs:v0.3],SizeBytes:297365055,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb 
gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:195659796,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[192.168.1.252/library/kube-proxy@sha256:8dfd672298f8fbcd37731e0f2e45be87199cee6ee1f482ffec766693dcec0ec4 192.168.1.252/library/kube-proxy:v1.17.3],SizeBytes:115964919,},ContainerImage{Names:[sonobuoy/sonobuoy@sha256:73f9cfe546ac6d0d5b94308293484c29c3b02fc341a975cfca97c80dd8728ed7 sonobuoy/sonobuoy:v0.17.2],SizeBytes:84339798,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5 gcr.io/kubernetes-e2e-test-images/agnhost:2.8],SizeBytes:52800335,},ContainerImage{Names:[192.168.1.252/library/flannel@sha256:bd76b84c74ad70368a2341c2402841b75950df881388e43fc2aca000c546653a 192.168.1.252/library/flannel:v0.11.0-amd64],SizeBytes:52567296,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/dnsutils@sha256:2abeee84efb79c14d731966e034af33bf324d3b26ca28497555511ff094b3ddd gcr.io/kubernetes-e2e-test-images/dnsutils:1.1],SizeBytes:9349974,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0],SizeBytes:4732240,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0],SizeBytes:1450451,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[192.168.1.252/library/pause@sha256:fcaff905397ba63fd376d0c3019f1f1cb6e7506131389edbcb3d22719f1ae54d k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea 192.168.1.252/library/pause:3.1 k8s.gcr.io/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} | |
Mar 6 03:13:11.896: INFO: | |
Logging kubelet events for node worker02 | |
Mar 6 03:13:11.900: INFO: | |
Logging pods the kubelet thinks are on node worker02 | |
Mar 6 03:13:11.905: INFO: kube-proxy-5xxdb started at 2020-03-06 02:30:30 +0000 UTC (0+1 container statuses recorded) | |
Mar 6 03:13:11.905: INFO: Container kube-proxy ready: true, restart count 1 | |
Mar 6 03:13:11.905: INFO: sonobuoy started at 2020-03-06 02:38:02 +0000 UTC (0+1 container statuses recorded) | |
Mar 6 03:13:11.905: INFO: Container kube-sonobuoy ready: true, restart count 0 | |
Mar 6 03:13:11.905: INFO: kube-flannel-ds-amd64-ztfzf started at 2020-03-06 02:30:30 +0000 UTC (1+1 container statuses recorded) | |
Mar 6 03:13:11.905: INFO: Init container install-cni ready: true, restart count 0 | |
Mar 6 03:13:11.905: INFO: Container kube-flannel ready: true, restart count 0 | |
Mar 6 03:13:11.905: INFO: envoy-wgz76 started at 2020-03-06 02:30:55 +0000 UTC (1+1 container statuses recorded) | |
Mar 6 03:13:11.905: INFO: Init container envoy-initconfig ready: false, restart count 0 | |
Mar 6 03:13:11.905: INFO: Container envoy ready: false, restart count 0 | |
Mar 6 03:13:11.905: INFO: sonobuoy-e2e-job-67137ff64ac145d3 started at 2020-03-06 02:38:12 +0000 UTC (0+2 container statuses recorded) | |
Mar 6 03:13:11.905: INFO: Container e2e ready: true, restart count 0 | |
Mar 6 03:13:11.905: INFO: Container sonobuoy-worker ready: true, restart count 0 | |
Mar 6 03:13:11.905: INFO: sonobuoy-systemd-logs-daemon-set-2e5dbce20e154397-bpjtd started at 2020-03-06 02:38:12 +0000 UTC (0+2 container statuses recorded) | |
Mar 6 03:13:11.905: INFO: Container sonobuoy-worker ready: true, restart count 0 | |
Mar 6 03:13:11.905: INFO: Container systemd-logs ready: true, restart count 0 | |
Mar 6 03:13:11.905: INFO: sample-webhook-deployment-5f65f8c764-2skpg started at 2020-03-06 03:12:22 +0000 UTC (0+1 container statuses recorded) | |
Mar 6 03:13:11.905: INFO: Container sample-webhook ready: true, restart count 0 | |
W0306 03:13:11.907709 19 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. | |
Mar 6 03:13:11.943: INFO: | |
Latency metrics for node worker02 | |
Mar 6 03:13:11.943: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
STEP: Destroying namespace "webhook-1905" for this suite. | |
STEP: Destroying namespace "webhook-1905-markers" for this suite. | |
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 | |
• Failure [50.322 seconds] | |
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 | |
should be able to deny custom resource creation, update and deletion [Conformance] [It] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
Mar 6 03:13:11.203: waiting for webhook configuration to be ready | |
Unexpected error: | |
<*errors.errorString | 0xc0000b3950>: { | |
s: "timed out waiting for the condition", | |
} | |
timed out waiting for the condition | |
occurred | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:1788 | |
------------------------------ | |
{"msg":"FAILED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":278,"completed":79,"skipped":1512,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]} | |
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS | |
------------------------------ | |
[k8s.io] InitContainer [NodeConformance] | |
should invoke init containers on a RestartNever pod [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
[BeforeEach] [k8s.io] InitContainer [NodeConformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 | |
STEP: Creating a kubernetes client | |
Mar 6 03:13:12.020: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759 | |
STEP: Building a namespace api object, basename init-container | |
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in init-container-8313 | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[BeforeEach] [k8s.io] InitContainer [NodeConformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 | |
[It] should invoke init containers on a RestartNever pod [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
STEP: creating the pod | |
Mar 6 03:13:12.162: INFO: PodSpec: initContainers in spec.initContainers | |
[AfterEach] [k8s.io] InitContainer [NodeConformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 | |
Mar 6 03:13:15.458: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
STEP: Destroying namespace "init-container-8313" for this suite. | |
•{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":278,"completed":80,"skipped":1565,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]} | |
SSSSSSSSSS | |
------------------------------ | |
[sig-api-machinery] Secrets | |
should fail to create secret due to empty secret key [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
[BeforeEach] [sig-api-machinery] Secrets | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 | |
STEP: Creating a kubernetes client | |
Mar 6 03:13:15.466: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759 | |
STEP: Building a namespace api object, basename secrets | |
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in secrets-7320 | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[It] should fail to create secret due to empty secret key [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
STEP: Creating projection with secret that has name secret-emptykey-test-1aca2271-9fcd-4909-8fc1-4ac3149c3d85 | |
[AfterEach] [sig-api-machinery] Secrets | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 | |
Mar 6 03:13:15.598: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
STEP: Destroying namespace "secrets-7320" for this suite. | |
•{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":278,"completed":81,"skipped":1575,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]} | |
SSSSSSS | |
------------------------------ | |
[k8s.io] InitContainer [NodeConformance] | |
should not start app containers if init containers fail on a RestartAlways pod [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
[BeforeEach] [k8s.io] InitContainer [NodeConformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 | |
STEP: Creating a kubernetes client | |
Mar 6 03:13:15.605: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759 | |
STEP: Building a namespace api object, basename init-container | |
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in init-container-4664 | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[BeforeEach] [k8s.io] InitContainer [NodeConformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 | |
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
STEP: creating the pod | |
Mar 6 03:13:15.743: INFO: PodSpec: initContainers in spec.initContainers | |
Mar 6 03:13:59.829: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-1b53fd52-9d57-466e-a3cb-a06caf734d5c", GenerateName:"", Namespace:"init-container-4664", SelfLink:"/api/v1/namespaces/init-container-4664/pods/pod-init-1b53fd52-9d57-466e-a3cb-a06caf734d5c", UID:"07e11a80-c959-40fc-bf44-2864ae2d5570", ResourceVersion:"13065", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63719061195, loc:(*time.Location)(0x7db7bc0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"743765664"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-dl88b", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc002c79880), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-dl88b", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-dl88b", ReadOnly:true, 
MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-dl88b", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc002c00438), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"worker02", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0028afc80), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002c004c0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002c004e0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc002c004e8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc002c004ec), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719061195, loc:(*time.Location)(0x7db7bc0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, 
loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719061195, loc:(*time.Location)(0x7db7bc0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719061195, loc:(*time.Location)(0x7db7bc0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719061195, loc:(*time.Location)(0x7db7bc0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"192.168.1.251", PodIP:"10.244.3.101", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.3.101"}}, StartTime:(*v1.Time)(0xc0030c2120), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0027edd50)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0027eddc0)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"docker://72ba66edb19bab04d4a5117ad0f1ff9c8f541503a6d74743683bd8e2ee54731a", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0030c2160), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0030c2140), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:"", Started:(*bool)(0xc002c0056f)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} | |
[AfterEach] [k8s.io] InitContainer [NodeConformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 | |
Mar 6 03:13:59.830: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
STEP: Destroying namespace "init-container-4664" for this suite. | |
• [SLOW TEST:44.232 seconds] | |
[k8s.io] InitContainer [NodeConformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 | |
should not start app containers if init containers fail on a RestartAlways pod [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
------------------------------ | |
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":278,"completed":82,"skipped":1582,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]} | |
SSSSSSSSSSSSSSSSSSSSSSSS | |
------------------------------ | |
[sig-cli] Kubectl client Kubectl run --rm job | |
should create a job from an image, then delete the job [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
[BeforeEach] [sig-cli] Kubectl client | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 | |
STEP: Creating a kubernetes client | |
Mar 6 03:13:59.837: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759 | |
STEP: Building a namespace api object, basename kubectl | |
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-6668 | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[BeforeEach] [sig-cli] Kubectl client | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 | |
[It] should create a job from an image, then delete the job [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
STEP: executing a command with run --rm and attach with stdin | |
Mar 6 03:13:59.966: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 --namespace=kubectl-6668 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'' | |
Mar 6 03:14:17.115: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\n" | |
Mar 6 03:14:17.115: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n" | |
STEP: verifying the job e2e-test-rm-busybox-job was deleted | |
[AfterEach] [sig-cli] Kubectl client | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 | |
Mar 6 03:14:19.122: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
STEP: Destroying namespace "kubectl-6668" for this suite. | |
• [SLOW TEST:19.292 seconds] | |
[sig-cli] Kubectl client | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 | |
Kubectl run --rm job | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1944 | |
should create a job from an image, then delete the job [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
------------------------------ | |
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run --rm job should create a job from an image, then delete the job [Conformance]","total":278,"completed":83,"skipped":1606,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]} | |
SSSSSSSSSSSSSSSSSS | |
------------------------------ | |
[k8s.io] [sig-node] PreStop | |
should call prestop when killing a pod [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
[BeforeEach] [k8s.io] [sig-node] PreStop | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 | |
STEP: Creating a kubernetes client | |
Mar 6 03:14:19.129: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759 | |
STEP: Building a namespace api object, basename prestop | |
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in prestop-5832 | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[BeforeEach] [k8s.io] [sig-node] PreStop | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:172 | |
[It] should call prestop when killing a pod [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
STEP: Creating server pod server in namespace prestop-5832 | |
STEP: Waiting for pods to come up. | |
STEP: Creating tester pod tester in namespace prestop-5832 | |
STEP: Deleting pre-stop pod | |
Mar 6 03:14:28.289: INFO: Saw: { | |
"Hostname": "server", | |
"Sent": null, | |
"Received": { | |
"prestop": 1 | |
}, | |
"Errors": null, | |
"Log": [ | |
"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", | |
"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." | |
], | |
"StillContactingPeers": true | |
} | |
STEP: Deleting the server pod | |
[AfterEach] [k8s.io] [sig-node] PreStop | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 | |
Mar 6 03:14:28.293: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
STEP: Destroying namespace "prestop-5832" for this suite. | |
• [SLOW TEST:9.173 seconds] | |
[k8s.io] [sig-node] PreStop | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 | |
should call prestop when killing a pod [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
------------------------------ | |
{"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":278,"completed":84,"skipped":1624,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]} | |
SSSSSSSSSSSSSSSSSSSSSSSSSSSS | |
------------------------------ | |
[sig-scheduling] SchedulerPredicates [Serial] | |
validates resource limits of pods that are allowed to run [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 | |
STEP: Creating a kubernetes client | |
Mar 6 03:14:28.302: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759 | |
STEP: Building a namespace api object, basename sched-pred | |
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in sched-pred-4269 | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 | |
Mar 6 03:14:28.438: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready | |
Mar 6 03:14:28.446: INFO: Waiting for terminating namespaces to be deleted... | |
Mar 6 03:14:28.451: INFO: | |
Logging pods the kubelet thinks are on node worker01 before test | |
Mar 6 03:14:28.457: INFO: contour-54748c65f5-jl5wz from projectcontour started at 2020-03-06 02:30:46 +0000 UTC (1 container statuses recorded) | |
Mar 6 03:14:28.457: INFO: Container contour ready: false, restart count 0 | |
Mar 6 03:14:28.457: INFO: metrics-server-78799bf646-xrsnn from kube-system started at 2020-03-06 02:30:51 +0000 UTC (1 container statuses recorded) | |
Mar 6 03:14:28.457: INFO: Container metrics-server ready: true, restart count 0 | |
Mar 6 03:14:28.457: INFO: kube-proxy-kcb8f from kube-system started at 2020-03-06 02:30:30 +0000 UTC (1 container statuses recorded) | |
Mar 6 03:14:28.457: INFO: Container kube-proxy ready: true, restart count 0 | |
Mar 6 03:14:28.457: INFO: contour-certgen-82k46 from projectcontour started at 2020-03-06 02:30:46 +0000 UTC (1 container statuses recorded) | |
Mar 6 03:14:28.457: INFO: Container contour ready: false, restart count 0 | |
Mar 6 03:14:28.457: INFO: contour-54748c65f5-gk5sz from projectcontour started at 2020-03-06 02:30:51 +0000 UTC (1 container statuses recorded) | |
Mar 6 03:14:28.457: INFO: Container contour ready: false, restart count 0 | |
Mar 6 03:14:28.457: INFO: sonobuoy-systemd-logs-daemon-set-2e5dbce20e154397-2bz8g from sonobuoy started at 2020-03-06 02:38:12 +0000 UTC (2 container statuses recorded) | |
Mar 6 03:14:28.457: INFO: Container sonobuoy-worker ready: true, restart count 0 | |
Mar 6 03:14:28.457: INFO: Container systemd-logs ready: true, restart count 0 | |
Mar 6 03:14:28.457: INFO: kube-flannel-ds-amd64-xxhz9 from kube-system started at 2020-03-06 02:30:30 +0000 UTC (1 container statuses recorded) | |
Mar 6 03:14:28.457: INFO: Container kube-flannel ready: true, restart count 1 | |
Mar 6 03:14:28.457: INFO: kuard-678c676f5d-vsn86 from default started at 2020-03-06 02:30:51 +0000 UTC (1 container statuses recorded) | |
Mar 6 03:14:28.457: INFO: Container kuard ready: true, restart count 0 | |
Mar 6 03:14:28.457: INFO: envoy-lvmcb from projectcontour started at 2020-03-06 02:30:45 +0000 UTC (1 container statuses recorded) | |
Mar 6 03:14:28.457: INFO: Container envoy ready: false, restart count 0 | |
Mar 6 03:14:28.457: INFO: kuard-678c676f5d-m29b6 from default started at 2020-03-06 02:30:49 +0000 UTC (1 container statuses recorded) | |
Mar 6 03:14:28.457: INFO: Container kuard ready: true, restart count 0 | |
Mar 6 03:14:28.457: INFO: kuard-678c676f5d-tzsnn from default started at 2020-03-06 02:30:51 +0000 UTC (1 container statuses recorded) | |
Mar 6 03:14:28.457: INFO: Container kuard ready: true, restart count 0 | |
Mar 6 03:14:28.457: INFO: | |
Logging pods the kubelet thinks are on node worker02 before test | |
Mar 6 03:14:28.464: INFO: kube-proxy-5xxdb from kube-system started at 2020-03-06 02:30:30 +0000 UTC (1 container statuses recorded) | |
Mar 6 03:14:28.464: INFO: Container kube-proxy ready: true, restart count 1 | |
Mar 6 03:14:28.464: INFO: sonobuoy from sonobuoy started at 2020-03-06 02:38:02 +0000 UTC (1 container statuses recorded) | |
Mar 6 03:14:28.464: INFO: Container kube-sonobuoy ready: true, restart count 0 | |
Mar 6 03:14:28.464: INFO: tester from prestop-5832 started at 2020-03-06 03:14:21 +0000 UTC (1 container statuses recorded) | |
Mar 6 03:14:28.464: INFO: Container tester ready: true, restart count 0 | |
Mar 6 03:14:28.464: INFO: kube-flannel-ds-amd64-ztfzf from kube-system started at 2020-03-06 02:30:30 +0000 UTC (1 container statuses recorded) | |
Mar 6 03:14:28.464: INFO: Container kube-flannel ready: true, restart count 0 | |
Mar 6 03:14:28.464: INFO: envoy-wgz76 from projectcontour started at 2020-03-06 02:30:55 +0000 UTC (1 container statuses recorded) | |
Mar 6 03:14:28.464: INFO: Container envoy ready: false, restart count 0 | |
Mar 6 03:14:28.464: INFO: sonobuoy-e2e-job-67137ff64ac145d3 from sonobuoy started at 2020-03-06 02:38:12 +0000 UTC (2 container statuses recorded) | |
Mar 6 03:14:28.464: INFO: Container e2e ready: true, restart count 0 | |
Mar 6 03:14:28.464: INFO: Container sonobuoy-worker ready: true, restart count 0 | |
Mar 6 03:14:28.464: INFO: sonobuoy-systemd-logs-daemon-set-2e5dbce20e154397-bpjtd from sonobuoy started at 2020-03-06 02:38:12 +0000 UTC (2 container statuses recorded) | |
Mar 6 03:14:28.464: INFO: Container sonobuoy-worker ready: true, restart count 0 | |
Mar 6 03:14:28.464: INFO: Container systemd-logs ready: true, restart count 0 | |
Mar 6 03:14:28.464: INFO: e2e-test-rm-busybox-job-gz74m from kubectl-6668 started at 2020-03-06 03:14:05 +0000 UTC (1 container statuses recorded) | |
Mar 6 03:14:28.464: INFO: Container e2e-test-rm-busybox-job ready: false, restart count 0 | |
Mar 6 03:14:28.464: INFO: server from prestop-5832 started at 2020-03-06 03:14:19 +0000 UTC (1 container statuses recorded) | |
Mar 6 03:14:28.464: INFO: Container server ready: true, restart count 0 | |
[It] validates resource limits of pods that are allowed to run [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
STEP: verifying the node has the label node worker01 | |
STEP: verifying the node has the label node worker02 | |
Mar 6 03:14:28.490: INFO: Pod kuard-678c676f5d-m29b6 requesting resource cpu=0m on Node worker01 | |
Mar 6 03:14:28.490: INFO: Pod kuard-678c676f5d-tzsnn requesting resource cpu=0m on Node worker01 | |
Mar 6 03:14:28.490: INFO: Pod kuard-678c676f5d-vsn86 requesting resource cpu=0m on Node worker01 | |
Mar 6 03:14:28.490: INFO: Pod kube-flannel-ds-amd64-xxhz9 requesting resource cpu=100m on Node worker01 | |
Mar 6 03:14:28.490: INFO: Pod kube-flannel-ds-amd64-ztfzf requesting resource cpu=100m on Node worker02 | |
Mar 6 03:14:28.490: INFO: Pod kube-proxy-5xxdb requesting resource cpu=0m on Node worker02 | |
Mar 6 03:14:28.490: INFO: Pod kube-proxy-kcb8f requesting resource cpu=0m on Node worker01 | |
Mar 6 03:14:28.490: INFO: Pod metrics-server-78799bf646-xrsnn requesting resource cpu=0m on Node worker01 | |
Mar 6 03:14:28.490: INFO: Pod server requesting resource cpu=0m on Node worker02 | |
Mar 6 03:14:28.490: INFO: Pod tester requesting resource cpu=0m on Node worker02 | |
Mar 6 03:14:28.490: INFO: Pod contour-54748c65f5-gk5sz requesting resource cpu=0m on Node worker01 | |
Mar 6 03:14:28.490: INFO: Pod contour-54748c65f5-jl5wz requesting resource cpu=0m on Node worker01 | |
Mar 6 03:14:28.490: INFO: Pod contour-certgen-82k46 requesting resource cpu=0m on Node worker01 | |
Mar 6 03:14:28.490: INFO: Pod envoy-lvmcb requesting resource cpu=0m on Node worker01 | |
Mar 6 03:14:28.490: INFO: Pod envoy-wgz76 requesting resource cpu=0m on Node worker02 | |
Mar 6 03:14:28.490: INFO: Pod sonobuoy requesting resource cpu=0m on Node worker02 | |
Mar 6 03:14:28.490: INFO: Pod sonobuoy-e2e-job-67137ff64ac145d3 requesting resource cpu=0m on Node worker02 | |
Mar 6 03:14:28.490: INFO: Pod sonobuoy-systemd-logs-daemon-set-2e5dbce20e154397-2bz8g requesting resource cpu=0m on Node worker01 | |
Mar 6 03:14:28.490: INFO: Pod sonobuoy-systemd-logs-daemon-set-2e5dbce20e154397-bpjtd requesting resource cpu=0m on Node worker02 | |
STEP: Starting Pods to consume most of the cluster CPU. | |
Mar 6 03:14:28.490: INFO: Creating a pod which consumes cpu=1330m on Node worker01 | |
Mar 6 03:14:28.496: INFO: Creating a pod which consumes cpu=1330m on Node worker02 | |
STEP: Creating another pod that requires an unavailable amount of CPU. | |
STEP: Considering event: | |
Type = [Normal], Name = [filler-pod-cdcc7cba-6b00-4359-9282-43a0866185fc.15f9988b67c92bc1], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4269/filler-pod-cdcc7cba-6b00-4359-9282-43a0866185fc to worker02] | |
STEP: Considering event: | |
Type = [Normal], Name = [filler-pod-cdcc7cba-6b00-4359-9282-43a0866185fc.15f9988b8c545cb1], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] | |
STEP: Considering event: | |
Type = [Normal], Name = [filler-pod-cdcc7cba-6b00-4359-9282-43a0866185fc.15f9988b90220cce], Reason = [Created], Message = [Created container filler-pod-cdcc7cba-6b00-4359-9282-43a0866185fc] | |
STEP: Considering event: | |
Type = [Normal], Name = [filler-pod-cdcc7cba-6b00-4359-9282-43a0866185fc.15f9988b970563e8], Reason = [Started], Message = [Started container filler-pod-cdcc7cba-6b00-4359-9282-43a0866185fc] | |
STEP: Considering event: | |
Type = [Normal], Name = [filler-pod-f9acd092-0533-4e68-a356-3b010f7f400c.15f9988b677d306d], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4269/filler-pod-f9acd092-0533-4e68-a356-3b010f7f400c to worker01] | |
STEP: Considering event: | |
Type = [Normal], Name = [filler-pod-f9acd092-0533-4e68-a356-3b010f7f400c.15f9988b8c9659cb], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.1"] | |
STEP: Considering event: | |
Type = [Normal], Name = [filler-pod-f9acd092-0533-4e68-a356-3b010f7f400c.15f9988bbdd81df3], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.1"] | |
STEP: Considering event: | |
Type = [Normal], Name = [filler-pod-f9acd092-0533-4e68-a356-3b010f7f400c.15f9988bc11bafaf], Reason = [Created], Message = [Created container filler-pod-f9acd092-0533-4e68-a356-3b010f7f400c] | |
STEP: Considering event: | |
Type = [Normal], Name = [filler-pod-f9acd092-0533-4e68-a356-3b010f7f400c.15f9988bc8cda64b], Reason = [Started], Message = [Started container filler-pod-f9acd092-0533-4e68-a356-3b010f7f400c] | |
STEP: Considering event: | |
Type = [Warning], Name = [additional-pod.15f9988c5725097b], Reason = [FailedScheduling], Message = [0/5 nodes are available: 2 Insufficient cpu, 3 node(s) had taints that the pod didn't tolerate.] | |
STEP: Considering event: | |
Type = [Warning], Name = [additional-pod.15f9988c578ea714], Reason = [FailedScheduling], Message = [0/5 nodes are available: 2 Insufficient cpu, 3 node(s) had taints that the pod didn't tolerate.] | |
STEP: removing the label node off the node worker01 | |
STEP: verifying the node doesn't have the label node | |
STEP: removing the label node off the node worker02 | |
STEP: verifying the node doesn't have the label node | |
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 | |
Mar 6 03:14:33.549: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
STEP: Destroying namespace "sched-pred-4269" for this suite. | |
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 | |
• [SLOW TEST:5.254 seconds] | |
[sig-scheduling] SchedulerPredicates [Serial] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 | |
validates resource limits of pods that are allowed to run [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
------------------------------ | |
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":278,"completed":85,"skipped":1652,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]} | |
[sig-storage] Secrets | |
optional updates should be reflected in volume [NodeConformance] [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
[BeforeEach] [sig-storage] Secrets | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 | |
STEP: Creating a kubernetes client | |
Mar 6 03:14:33.556: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759 | |
STEP: Building a namespace api object, basename secrets | |
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in secrets-2718 | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[It] optional updates should be reflected in volume [NodeConformance] [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
STEP: Creating secret with name s-test-opt-del-7002cf1e-8aa0-4b99-87a9-87953e087c52 | |
STEP: Creating secret with name s-test-opt-upd-b806b348-982a-41e6-9838-46ef850c436f | |
STEP: Creating the pod | |
STEP: Deleting secret s-test-opt-del-7002cf1e-8aa0-4b99-87a9-87953e087c52 | |
STEP: Updating secret s-test-opt-upd-b806b348-982a-41e6-9838-46ef850c436f | |
STEP: Creating secret with name s-test-opt-create-858cb526-dd63-4d02-bc09-019e87bc45a4 | |
STEP: waiting to observe update in volume | |
[AfterEach] [sig-storage] Secrets | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 | |
Mar 6 03:14:37.758: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
STEP: Destroying namespace "secrets-2718" for this suite. | |
•{"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":86,"skipped":1652,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]} | |
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS | |
------------------------------ | |
[sig-api-machinery] Secrets | |
should be consumable via the environment [NodeConformance] [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
[BeforeEach] [sig-api-machinery] Secrets | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 | |
STEP: Creating a kubernetes client | |
Mar 6 03:14:37.766: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759 | |
STEP: Building a namespace api object, basename secrets | |
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in secrets-8777 | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[It] should be consumable via the environment [NodeConformance] [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
STEP: creating secret secrets-8777/secret-test-17cf8109-0030-4616-bb39-a1cbaa6d8145 | |
STEP: Creating a pod to test consume secrets | |
Mar 6 03:14:37.905: INFO: Waiting up to 5m0s for pod "pod-configmaps-e621dfc1-9b5a-4591-9ac3-1942565f221a" in namespace "secrets-8777" to be "success or failure" | |
Mar 6 03:14:37.910: INFO: Pod "pod-configmaps-e621dfc1-9b5a-4591-9ac3-1942565f221a": Phase="Pending", Reason="", readiness=false. Elapsed: 5.431302ms | |
Mar 6 03:14:39.914: INFO: Pod "pod-configmaps-e621dfc1-9b5a-4591-9ac3-1942565f221a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00939711s | |
Mar 6 03:14:41.917: INFO: Pod "pod-configmaps-e621dfc1-9b5a-4591-9ac3-1942565f221a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.012186493s | |
Mar 6 03:14:43.919: INFO: Pod "pod-configmaps-e621dfc1-9b5a-4591-9ac3-1942565f221a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.014367103s | |
Mar 6 03:14:45.921: INFO: Pod "pod-configmaps-e621dfc1-9b5a-4591-9ac3-1942565f221a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.01666046s | |
STEP: Saw pod success | |
Mar 6 03:14:45.921: INFO: Pod "pod-configmaps-e621dfc1-9b5a-4591-9ac3-1942565f221a" satisfied condition "success or failure" | |
Mar 6 03:14:45.924: INFO: Trying to get logs from node worker01 pod pod-configmaps-e621dfc1-9b5a-4591-9ac3-1942565f221a container env-test: <nil> | |
STEP: delete the pod | |
Mar 6 03:14:45.936: INFO: Waiting for pod pod-configmaps-e621dfc1-9b5a-4591-9ac3-1942565f221a to disappear | |
Mar 6 03:14:45.938: INFO: Pod pod-configmaps-e621dfc1-9b5a-4591-9ac3-1942565f221a no longer exists | |
[AfterEach] [sig-api-machinery] Secrets | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 | |
Mar 6 03:14:45.938: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
STEP: Destroying namespace "secrets-8777" for this suite. | |
• [SLOW TEST:8.178 seconds] | |
[sig-api-machinery] Secrets | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 | |
should be consumable via the environment [NodeConformance] [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
------------------------------ | |
{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":278,"completed":87,"skipped":1698,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]} | |
SSSSSSS | |
------------------------------ | |
[k8s.io] Container Runtime blackbox test on terminated container | |
should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
[BeforeEach] [k8s.io] Container Runtime | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 | |
STEP: Creating a kubernetes client | |
Mar 6 03:14:45.945: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759 | |
STEP: Building a namespace api object, basename container-runtime | |
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-runtime-1502 | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
STEP: create the container | |
STEP: wait for the container to reach Succeeded | |
STEP: get the container status | |
STEP: the container should be terminated | |
STEP: the termination message should be set | |
Mar 6 03:14:48.112: INFO: Expected: &{OK} to match Container's Termination Message: OK -- | |
STEP: delete the container | |
[AfterEach] [k8s.io] Container Runtime | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 | |
Mar 6 03:14:48.124: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
STEP: Destroying namespace "container-runtime-1502" for this suite. | |
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":88,"skipped":1705,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]} | |
SSSSSS | |
------------------------------ | |
[sig-storage] Projected downwardAPI | |
should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
[BeforeEach] [sig-storage] Projected downwardAPI | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 | |
STEP: Creating a kubernetes client | |
Mar 6 03:14:48.131: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759 | |
STEP: Building a namespace api object, basename projected | |
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-336 | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[BeforeEach] [sig-storage] Projected downwardAPI | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 | |
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
STEP: Creating a pod to test downward API volume plugin | |
Mar 6 03:14:48.266: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ad81a96d-8879-4230-b7a4-61c096d14159" in namespace "projected-336" to be "success or failure" | |
Mar 6 03:14:48.268: INFO: Pod "downwardapi-volume-ad81a96d-8879-4230-b7a4-61c096d14159": Phase="Pending", Reason="", readiness=false. Elapsed: 2.223805ms | |
Mar 6 03:14:50.270: INFO: Pod "downwardapi-volume-ad81a96d-8879-4230-b7a4-61c096d14159": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.004742588s | |
STEP: Saw pod success | |
Mar 6 03:14:50.270: INFO: Pod "downwardapi-volume-ad81a96d-8879-4230-b7a4-61c096d14159" satisfied condition "success or failure" | |
Mar 6 03:14:50.273: INFO: Trying to get logs from node worker01 pod downwardapi-volume-ad81a96d-8879-4230-b7a4-61c096d14159 container client-container: <nil> | |
STEP: delete the pod | |
Mar 6 03:14:50.290: INFO: Waiting for pod downwardapi-volume-ad81a96d-8879-4230-b7a4-61c096d14159 to disappear | |
Mar 6 03:14:50.292: INFO: Pod downwardapi-volume-ad81a96d-8879-4230-b7a4-61c096d14159 no longer exists | |
[AfterEach] [sig-storage] Projected downwardAPI | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 | |
Mar 6 03:14:50.292: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
STEP: Destroying namespace "projected-336" for this suite. | |
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":89,"skipped":1711,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]} | |
SSSSS | |
------------------------------ | |
[sig-network] Networking Granular Checks: Pods | |
should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
[BeforeEach] [sig-network] Networking | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 | |
STEP: Creating a kubernetes client | |
Mar 6 03:14:50.304: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759 | |
STEP: Building a namespace api object, basename pod-network-test | |
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pod-network-test-2567 | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
STEP: Performing setup for networking test in namespace pod-network-test-2567 | |
STEP: creating a selector | |
STEP: Creating the service pods in kubernetes | |
Mar 6 03:14:50.439: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable | |
STEP: Creating test pods | |
Mar 6 03:15:12.493: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.108:8080/dial?request=hostname&protocol=udp&host=10.244.4.23&port=8081&tries=1'] Namespace:pod-network-test-2567 PodName:host-test-container-pod ContainerName:agnhost Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} | |
Mar 6 03:15:12.493: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759 | |
Mar 6 03:15:12.603: INFO: Waiting for responses: map[] | |
Mar 6 03:15:12.605: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.108:8080/dial?request=hostname&protocol=udp&host=10.244.3.107&port=8081&tries=1'] Namespace:pod-network-test-2567 PodName:host-test-container-pod ContainerName:agnhost Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} | |
Mar 6 03:15:12.605: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759 | |
Mar 6 03:15:12.749: INFO: Waiting for responses: map[] | |
[AfterEach] [sig-network] Networking | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 | |
Mar 6 03:15:12.749: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
STEP: Destroying namespace "pod-network-test-2567" for this suite. | |
• [SLOW TEST:22.453 seconds] | |
[sig-network] Networking | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 | |
Granular Checks: Pods | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 | |
should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
------------------------------ | |
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":90,"skipped":1716,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]} | |
S | |
------------------------------ | |
[sig-apps] ReplicationController | |
should adopt matching pods on creation [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
[BeforeEach] [sig-apps] ReplicationController | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 | |
STEP: Creating a kubernetes client | |
Mar 6 03:15:12.757: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759 | |
STEP: Building a namespace api object, basename replication-controller | |
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in replication-controller-8478 | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[It] should adopt matching pods on creation [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
STEP: Given a Pod with a 'name' label pod-adoption is created | |
STEP: When a replication controller with a matching selector is created | |
STEP: Then the orphan pod is adopted | |
[AfterEach] [sig-apps] ReplicationController | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 | |
Mar 6 03:15:15.908: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
STEP: Destroying namespace "replication-controller-8478" for this suite. | |
•{"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":278,"completed":91,"skipped":1717,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]} | |
SS | |
------------------------------ | |
[sig-storage] Projected downwardAPI | |
should update labels on modification [NodeConformance] [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
[BeforeEach] [sig-storage] Projected downwardAPI | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 | |
STEP: Creating a kubernetes client | |
Mar 6 03:15:15.920: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759 | |
STEP: Building a namespace api object, basename projected | |
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-2439 | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[BeforeEach] [sig-storage] Projected downwardAPI | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 | |
[It] should update labels on modification [NodeConformance] [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
STEP: Creating the pod | |
Mar 6 03:15:18.593: INFO: Successfully updated pod "labelsupdate21c72232-a4f4-45c3-8090-647410d93d42" | |
[AfterEach] [sig-storage] Projected downwardAPI | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 | |
Mar 6 03:15:20.603: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
STEP: Destroying namespace "projected-2439" for this suite. | |
•{"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":278,"completed":92,"skipped":1719,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]} | |
SSSS | |
------------------------------ | |
[sig-api-machinery] ResourceQuota | |
should create a ResourceQuota and capture the life of a secret. [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
[BeforeEach] [sig-api-machinery] ResourceQuota | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 | |
STEP: Creating a kubernetes client | |
Mar 6 03:15:20.617: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759 | |
STEP: Building a namespace api object, basename resourcequota | |
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in resourcequota-827 | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[It] should create a ResourceQuota and capture the life of a secret. [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
STEP: Discovering how many secrets are in namespace by default | |
STEP: Counting existing ResourceQuota | |
STEP: Creating a ResourceQuota | |
STEP: Ensuring resource quota status is calculated | |
STEP: Creating a Secret | |
STEP: Ensuring resource quota status captures secret creation | |
STEP: Deleting a secret | |
STEP: Ensuring resource quota status released usage | |
[AfterEach] [sig-api-machinery] ResourceQuota | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 | |
Mar 6 03:15:37.791: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
STEP: Destroying namespace "resourcequota-827" for this suite. | |
• [SLOW TEST:17.182 seconds] | |
[sig-api-machinery] ResourceQuota | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 | |
should create a ResourceQuota and capture the life of a secret. [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
------------------------------ | |
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":278,"completed":93,"skipped":1723,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]} | |
SS | |
------------------------------ | |
[sig-storage] Projected downwardAPI | |
should provide container's memory request [NodeConformance] [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
[BeforeEach] [sig-storage] Projected downwardAPI | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 | |
STEP: Creating a kubernetes client | |
Mar 6 03:15:37.799: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759 | |
STEP: Building a namespace api object, basename projected | |
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-7751 | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[BeforeEach] [sig-storage] Projected downwardAPI | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 | |
[It] should provide container's memory request [NodeConformance] [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
STEP: Creating a pod to test downward API volume plugin | |
Mar 6 03:15:37.958: INFO: Waiting up to 5m0s for pod "downwardapi-volume-dd162cd5-aa8a-43ca-ae84-d0d57c7f7477" in namespace "projected-7751" to be "success or failure" | |
Mar 6 03:15:37.960: INFO: Pod "downwardapi-volume-dd162cd5-aa8a-43ca-ae84-d0d57c7f7477": Phase="Pending", Reason="", readiness=false. Elapsed: 2.064581ms | |
Mar 6 03:15:39.962: INFO: Pod "downwardapi-volume-dd162cd5-aa8a-43ca-ae84-d0d57c7f7477": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.004226029s | |
STEP: Saw pod success | |
Mar 6 03:15:39.962: INFO: Pod "downwardapi-volume-dd162cd5-aa8a-43ca-ae84-d0d57c7f7477" satisfied condition "success or failure" | |
Mar 6 03:15:39.964: INFO: Trying to get logs from node worker02 pod downwardapi-volume-dd162cd5-aa8a-43ca-ae84-d0d57c7f7477 container client-container: <nil> | |
STEP: delete the pod | |
Mar 6 03:15:39.977: INFO: Waiting for pod downwardapi-volume-dd162cd5-aa8a-43ca-ae84-d0d57c7f7477 to disappear | |
Mar 6 03:15:39.979: INFO: Pod downwardapi-volume-dd162cd5-aa8a-43ca-ae84-d0d57c7f7477 no longer exists | |
[AfterEach] [sig-storage] Projected downwardAPI | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 | |
Mar 6 03:15:39.979: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
STEP: Destroying namespace "projected-7751" for this suite. | |
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":278,"completed":94,"skipped":1725,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]} | |
SSSSSSSSSSSSSSSSSSSSSSSS | |
------------------------------ | |
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] | |
works for CRD with validation schema [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 | |
STEP: Creating a kubernetes client | |
Mar 6 03:15:39.986: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759 | |
STEP: Building a namespace api object, basename crd-publish-openapi | |
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in crd-publish-openapi-6822 | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[It] works for CRD with validation schema [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
Mar 6 03:15:40.124: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759 | |
STEP: client-side validation (kubectl create and apply) allows request with known and required properties | |
Mar 6 03:15:51.411: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 --namespace=crd-publish-openapi-6822 create -f -' | |
Mar 6 03:16:01.621: INFO: stderr: "" | |
Mar 6 03:16:01.621: INFO: stdout: "e2e-test-crd-publish-openapi-5310-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" | |
Mar 6 03:16:01.621: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 --namespace=crd-publish-openapi-6822 delete e2e-test-crd-publish-openapi-5310-crds test-foo' | |
Mar 6 03:16:16.717: INFO: stderr: "" | |
Mar 6 03:16:16.717: INFO: stdout: "e2e-test-crd-publish-openapi-5310-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" | |
Mar 6 03:16:16.717: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 --namespace=crd-publish-openapi-6822 apply -f -' | |
Mar 6 03:16:21.957: INFO: stderr: "" | |
Mar 6 03:16:21.957: INFO: stdout: "e2e-test-crd-publish-openapi-5310-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" | |
Mar 6 03:16:21.957: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 --namespace=crd-publish-openapi-6822 delete e2e-test-crd-publish-openapi-5310-crds test-foo' | |
Mar 6 03:16:37.039: INFO: stderr: "" | |
Mar 6 03:16:37.039: INFO: stdout: "e2e-test-crd-publish-openapi-5310-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" | |
STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema | |
Mar 6 03:16:37.039: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 --namespace=crd-publish-openapi-6822 create -f -' | |
Mar 6 03:16:37.225: INFO: rc: 1 | |
Mar 6 03:16:37.225: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 --namespace=crd-publish-openapi-6822 apply -f -' | |
Mar 6 03:16:37.405: INFO: rc: 1 | |
STEP: client-side validation (kubectl create and apply) rejects request without required properties | |
Mar 6 03:16:37.405: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 --namespace=crd-publish-openapi-6822 create -f -' | |
Mar 6 03:16:37.593: INFO: rc: 1 | |
Mar 6 03:16:37.593: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 --namespace=crd-publish-openapi-6822 apply -f -' | |
Mar 6 03:16:37.784: INFO: rc: 1 | |
STEP: kubectl explain works to explain CR properties | |
Mar 6 03:16:37.784: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 explain e2e-test-crd-publish-openapi-5310-crds' | |
Mar 6 03:16:52.985: INFO: stderr: "" | |
Mar 6 03:16:52.985: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-5310-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t<string>\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t<string>\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t<Object>\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t<Object>\n Specification of Foo\n\n status\t<Object>\n Status of Foo\n\n" | |
STEP: kubectl explain works to explain CR properties recursively | |
Mar 6 03:16:52.985: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 explain e2e-test-crd-publish-openapi-5310-crds.metadata' | |
Mar 6 03:17:08.201: INFO: stderr: "" | |
Mar 6 03:17:08.201: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-5310-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata <Object>\n\nDESCRIPTION:\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t<map[string]string>\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t<string>\n The name of the cluster which the object belongs to. This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t<string>\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC. Populated by the system.\n Read-only. Null for lists. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t<integer>\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t<string>\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested. Populated by the system when a graceful deletion is\n requested. Read-only. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t<string>\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server. If this field is specified and the generated name exists, the\n server will NOT return a 409 - instead, it will either return 201 Created\n or 500 with Reason ServerTimeout indicating a unique name could not be\n found in the time allotted, and the client should retry (optionally after\n the time indicated in the Retry-After header). Applied only if Name is not\n specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t<integer>\n A sequence number representing a specific generation of the desired state.\n Populated by the system. Read-only.\n\n labels\t<map[string]string>\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t<string>\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t<string>\n Namespace defines the space within each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty. Must\n be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t<string>\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources. Populated by the system.\n Read-only. Value must be treated as opaque by clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t<string>\n SelfLink is a URL representing this object. Populated by the system.\n Read-only. DEPRECATED Kubernetes will stop propagating this field in 1.20\n release and the field is planned to be removed in 1.21 release.\n\n uid\t<string>\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations. Populated by the system. Read-only.\n More info: http://kubernetes.io/docs/user-guide/identifiers#uids\n\n" | |
Mar 6 03:17:08.201: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 explain e2e-test-crd-publish-openapi-5310-crds.spec' | |
Mar 6 03:17:23.358: INFO: stderr: "" | |
Mar 6 03:17:23.358: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-5310-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec <Object>\n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n" | |
Mar 6 03:17:23.358: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 explain e2e-test-crd-publish-openapi-5310-crds.spec.bars' | |
Mar 6 03:17:38.515: INFO: stderr: "" | |
Mar 6 03:17:38.515: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-5310-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t<string>\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n name\t<string> -required-\n Name of Bar.\n\n" | |
STEP: kubectl explain works to return error when explain is called on property that doesn't exist | |
Mar 6 03:17:38.516: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 explain e2e-test-crd-publish-openapi-5310-crds.spec.bars2' | |
Mar 6 03:17:53.726: INFO: rc: 1 | |
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 | |
Mar 6 03:18:21.532: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
STEP: Destroying namespace "crd-publish-openapi-6822" for this suite. | |
• [SLOW TEST:161.553 seconds] | |
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 | |
works for CRD with validation schema [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
------------------------------ | |
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":278,"completed":95,"skipped":1749,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]} | |
SSSSS | |
------------------------------ | |
[sig-storage] EmptyDir volumes | |
should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
[BeforeEach] [sig-storage] EmptyDir volumes | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 | |
STEP: Creating a kubernetes client | |
Mar 6 03:18:21.540: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759 | |
STEP: Building a namespace api object, basename emptydir | |
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-9621 | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
STEP: Creating a pod to test emptydir 0644 on tmpfs | |
Mar 6 03:18:21.675: INFO: Waiting up to 5m0s for pod "pod-612a6d6f-43f0-4f98-a1e1-716a8aff25d8" in namespace "emptydir-9621" to be "success or failure" | |
Mar 6 03:18:21.677: INFO: Pod "pod-612a6d6f-43f0-4f98-a1e1-716a8aff25d8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.303795ms | |
Mar 6 03:18:23.680: INFO: Pod "pod-612a6d6f-43f0-4f98-a1e1-716a8aff25d8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.004765868s | |
STEP: Saw pod success | |
Mar 6 03:18:23.680: INFO: Pod "pod-612a6d6f-43f0-4f98-a1e1-716a8aff25d8" satisfied condition "success or failure" | |
Mar 6 03:18:23.682: INFO: Trying to get logs from node worker02 pod pod-612a6d6f-43f0-4f98-a1e1-716a8aff25d8 container test-container: <nil> | |
STEP: delete the pod | |
Mar 6 03:18:23.702: INFO: Waiting for pod pod-612a6d6f-43f0-4f98-a1e1-716a8aff25d8 to disappear | |
Mar 6 03:18:23.713: INFO: Pod pod-612a6d6f-43f0-4f98-a1e1-716a8aff25d8 no longer exists | |
[AfterEach] [sig-storage] EmptyDir volumes | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 | |
Mar 6 03:18:23.713: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
STEP: Destroying namespace "emptydir-9621" for this suite. | |
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":96,"skipped":1754,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]} | |
SSSSSSSSSSS | |
------------------------------ | |
[sig-storage] Projected downwardAPI | |
should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
[BeforeEach] [sig-storage] Projected downwardAPI | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 | |
STEP: Creating a kubernetes client | |
Mar 6 03:18:23.725: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759 | |
STEP: Building a namespace api object, basename projected | |
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-7937 | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[BeforeEach] [sig-storage] Projected downwardAPI | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 | |
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
STEP: Creating a pod to test downward API volume plugin | |
Mar 6 03:18:23.858: INFO: Waiting up to 5m0s for pod "downwardapi-volume-89dafbfc-5654-44d6-b062-ca4107ad419e" in namespace "projected-7937" to be "success or failure" | |
Mar 6 03:18:23.860: INFO: Pod "downwardapi-volume-89dafbfc-5654-44d6-b062-ca4107ad419e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.339379ms | |
Mar 6 03:18:25.862: INFO: Pod "downwardapi-volume-89dafbfc-5654-44d6-b062-ca4107ad419e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.004633342s | |
STEP: Saw pod success | |
Mar 6 03:18:25.862: INFO: Pod "downwardapi-volume-89dafbfc-5654-44d6-b062-ca4107ad419e" satisfied condition "success or failure" | |
Mar 6 03:18:25.866: INFO: Trying to get logs from node worker02 pod downwardapi-volume-89dafbfc-5654-44d6-b062-ca4107ad419e container client-container: <nil> | |
STEP: delete the pod | |
Mar 6 03:18:25.884: INFO: Waiting for pod downwardapi-volume-89dafbfc-5654-44d6-b062-ca4107ad419e to disappear | |
Mar 6 03:18:25.887: INFO: Pod downwardapi-volume-89dafbfc-5654-44d6-b062-ca4107ad419e no longer exists | |
[AfterEach] [sig-storage] Projected downwardAPI | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 | |
Mar 6 03:18:25.887: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
STEP: Destroying namespace "projected-7937" for this suite. | |
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":97,"skipped":1765,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]} | |
------------------------------ | |
[sig-storage] ConfigMap | |
should be consumable from pods in volume with mappings [NodeConformance] [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
[BeforeEach] [sig-storage] ConfigMap | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 | |
STEP: Creating a kubernetes client | |
Mar 6 03:18:25.910: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759 | |
STEP: Building a namespace api object, basename configmap | |
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in configmap-8452 | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
STEP: Creating configMap with name configmap-test-volume-map-dfc42c2f-3cdc-41aa-8164-ac4005b2ec1b | |
STEP: Creating a pod to test consume configMaps | |
Mar 6 03:18:26.054: INFO: Waiting up to 5m0s for pod "pod-configmaps-a9e28c84-3dfe-4797-9435-e40b4a2ede75" in namespace "configmap-8452" to be "success or failure" | |
Mar 6 03:18:26.056: INFO: Pod "pod-configmaps-a9e28c84-3dfe-4797-9435-e40b4a2ede75": Phase="Pending", Reason="", readiness=false. Elapsed: 1.835541ms | |
Mar 6 03:18:28.058: INFO: Pod "pod-configmaps-a9e28c84-3dfe-4797-9435-e40b4a2ede75": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.004206937s | |
STEP: Saw pod success | |
Mar 6 03:18:28.058: INFO: Pod "pod-configmaps-a9e28c84-3dfe-4797-9435-e40b4a2ede75" satisfied condition "success or failure" | |
Mar 6 03:18:28.060: INFO: Trying to get logs from node worker02 pod pod-configmaps-a9e28c84-3dfe-4797-9435-e40b4a2ede75 container configmap-volume-test: <nil> | |
STEP: delete the pod | |
Mar 6 03:18:28.074: INFO: Waiting for pod pod-configmaps-a9e28c84-3dfe-4797-9435-e40b4a2ede75 to disappear | |
Mar 6 03:18:28.075: INFO: Pod pod-configmaps-a9e28c84-3dfe-4797-9435-e40b4a2ede75 no longer exists | |
[AfterEach] [sig-storage] ConfigMap | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 | |
Mar 6 03:18:28.076: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
STEP: Destroying namespace "configmap-8452" for this suite. | |
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":98,"skipped":1765,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]} | |
SSSS | |
------------------------------ | |
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] | |
works for multiple CRDs of same group but different versions [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 | |
STEP: Creating a kubernetes client | |
Mar 6 03:18:28.082: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759 | |
STEP: Building a namespace api object, basename crd-publish-openapi | |
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in crd-publish-openapi-7745 | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[It] works for multiple CRDs of same group but different versions [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation | |
Mar 6 03:18:28.215: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759 | |
STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation | |
Mar 6 03:19:03.392: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759 | |
Mar 6 03:19:36.220: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759 | |
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 | |
Mar 6 03:20:19.710: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
STEP: Destroying namespace "crd-publish-openapi-7745" for this suite. | |
• [SLOW TEST:111.638 seconds] | |
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 | |
works for multiple CRDs of same group but different versions [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
------------------------------ | |
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":278,"completed":99,"skipped":1769,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]} | |
SSSSSSSSSSSSSSSS | |
------------------------------ | |
[sig-storage] Secrets | |
should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
[BeforeEach] [sig-storage] Secrets | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 | |
STEP: Creating a kubernetes client | |
Mar 6 03:20:19.721: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759 | |
STEP: Building a namespace api object, basename secrets | |
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in secrets-180 | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in secret-namespace-1050 | |
STEP: Creating secret with name secret-test-7f565658-ea02-4706-9f03-0d9924431fbc | |
STEP: Creating a pod to test consume secrets | |
Mar 6 03:20:19.996: INFO: Waiting up to 5m0s for pod "pod-secrets-6a017ce8-c0c9-4a88-bec3-ea8624502a5a" in namespace "secrets-180" to be "success or failure" | |
Mar 6 03:20:19.998: INFO: Pod "pod-secrets-6a017ce8-c0c9-4a88-bec3-ea8624502a5a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032907ms | |
Mar 6 03:20:22.001: INFO: Pod "pod-secrets-6a017ce8-c0c9-4a88-bec3-ea8624502a5a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.004349115s | |
STEP: Saw pod success | |
Mar 6 03:20:22.001: INFO: Pod "pod-secrets-6a017ce8-c0c9-4a88-bec3-ea8624502a5a" satisfied condition "success or failure" | |
Mar 6 03:20:22.002: INFO: Trying to get logs from node worker02 pod pod-secrets-6a017ce8-c0c9-4a88-bec3-ea8624502a5a container secret-volume-test: <nil> | |
STEP: delete the pod | |
Mar 6 03:20:22.026: INFO: Waiting for pod pod-secrets-6a017ce8-c0c9-4a88-bec3-ea8624502a5a to disappear | |
Mar 6 03:20:22.027: INFO: Pod pod-secrets-6a017ce8-c0c9-4a88-bec3-ea8624502a5a no longer exists | |
[AfterEach] [sig-storage] Secrets | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 | |
Mar 6 03:20:22.027: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
STEP: Destroying namespace "secrets-180" for this suite. | |
STEP: Destroying namespace "secret-namespace-1050" for this suite. | |
•{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":278,"completed":100,"skipped":1785,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]} | |
SSSSSSSSSSSSSSSSSSSSSSSS | |
------------------------------ | |
[sig-storage] EmptyDir volumes | |
should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
[BeforeEach] [sig-storage] EmptyDir volumes | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 | |
STEP: Creating a kubernetes client | |
Mar 6 03:20:22.041: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759 | |
STEP: Building a namespace api object, basename emptydir | |
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-5721 | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
STEP: Creating a pod to test emptydir 0777 on node default medium | |
Mar 6 03:20:22.178: INFO: Waiting up to 5m0s for pod "pod-956618cb-990c-4edd-8a20-3ebf3433df4b" in namespace "emptydir-5721" to be "success or failure" | |
Mar 6 03:20:22.180: INFO: Pod "pod-956618cb-990c-4edd-8a20-3ebf3433df4b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.469775ms | |
Mar 6 03:20:24.185: INFO: Pod "pod-956618cb-990c-4edd-8a20-3ebf3433df4b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006845277s | |
STEP: Saw pod success | |
Mar 6 03:20:24.185: INFO: Pod "pod-956618cb-990c-4edd-8a20-3ebf3433df4b" satisfied condition "success or failure" | |
Mar 6 03:20:24.187: INFO: Trying to get logs from node worker02 pod pod-956618cb-990c-4edd-8a20-3ebf3433df4b container test-container: <nil> | |
STEP: delete the pod | |
Mar 6 03:20:24.233: INFO: Waiting for pod pod-956618cb-990c-4edd-8a20-3ebf3433df4b to disappear | |
Mar 6 03:20:24.236: INFO: Pod pod-956618cb-990c-4edd-8a20-3ebf3433df4b no longer exists | |
[AfterEach] [sig-storage] EmptyDir volumes | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 | |
Mar 6 03:20:24.236: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
STEP: Destroying namespace "emptydir-5721" for this suite. | |
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":101,"skipped":1809,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]} | |
SSSSSSSSSSSSSSSSSSSSS | |
------------------------------ | |
[sig-storage] Projected secret | |
should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
[BeforeEach] [sig-storage] Projected secret | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 | |
STEP: Creating a kubernetes client | |
Mar 6 03:20:24.245: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759 | |
STEP: Building a namespace api object, basename projected | |
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-2328 | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
STEP: Creating secret with name projected-secret-test-b206ea0e-f179-451c-a73f-1084efcd0745 | |
STEP: Creating a pod to test consume secrets | |
Mar 6 03:20:24.387: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-4990dff1-e9d9-42d4-b394-02747ee9f7db" in namespace "projected-2328" to be "success or failure" | |
Mar 6 03:20:24.391: INFO: Pod "pod-projected-secrets-4990dff1-e9d9-42d4-b394-02747ee9f7db": Phase="Pending", Reason="", readiness=false. Elapsed: 3.622879ms | |
Mar 6 03:20:26.393: INFO: Pod "pod-projected-secrets-4990dff1-e9d9-42d4-b394-02747ee9f7db": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.005886992s | |
STEP: Saw pod success | |
Mar 6 03:20:26.393: INFO: Pod "pod-projected-secrets-4990dff1-e9d9-42d4-b394-02747ee9f7db" satisfied condition "success or failure" | |
Mar 6 03:20:26.396: INFO: Trying to get logs from node worker02 pod pod-projected-secrets-4990dff1-e9d9-42d4-b394-02747ee9f7db container secret-volume-test: <nil> | |
STEP: delete the pod | |
Mar 6 03:20:26.414: INFO: Waiting for pod pod-projected-secrets-4990dff1-e9d9-42d4-b394-02747ee9f7db to disappear | |
Mar 6 03:20:26.433: INFO: Pod pod-projected-secrets-4990dff1-e9d9-42d4-b394-02747ee9f7db no longer exists | |
[AfterEach] [sig-storage] Projected secret | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 | |
Mar 6 03:20:26.433: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
STEP: Destroying namespace "projected-2328" for this suite. | |
•{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":278,"completed":102,"skipped":1830,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]} | |
SSSSSSSS | |
------------------------------ | |
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] | |
should be able to deny attaching pod [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 | |
STEP: Creating a kubernetes client | |
Mar 6 03:20:26.446: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759 | |
STEP: Building a namespace api object, basename webhook | |
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in webhook-7545 | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 | |
STEP: Setting up server cert | |
STEP: Create role binding to let webhook read extension-apiserver-authentication | |
STEP: Deploying the webhook pod | |
STEP: Wait for the deployment to be ready | |
Mar 6 03:20:27.293: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set | |
Mar 6 03:20:29.299: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719061627, loc:(*time.Location)(0x7db7bc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719061627, loc:(*time.Location)(0x7db7bc0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719061627, loc:(*time.Location)(0x7db7bc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719061627, loc:(*time.Location)(0x7db7bc0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} | |
STEP: Deploying the webhook service | |
STEP: Verifying the service has paired with the endpoint | |
Mar 6 03:20:32.323: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 | |
[It] should be able to deny attaching pod [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
STEP: Registering the webhook via the AdmissionRegistration API | |
Mar 6 03:20:42.340: INFO: Waiting for webhook configuration to be ready... | |
Mar 6 03:20:52.449: INFO: Waiting for webhook configuration to be ready... | |
Mar 6 03:21:02.550: INFO: Waiting for webhook configuration to be ready... | |
Mar 6 03:21:12.650: INFO: Waiting for webhook configuration to be ready... | |
Mar 6 03:21:22.658: INFO: Waiting for webhook configuration to be ready... | |
Mar 6 03:21:22.658: FAIL: waiting for webhook configuration to be ready | |
Unexpected error: | |
<*errors.errorString | 0xc0000b3950>: { | |
s: "timed out waiting for the condition", | |
} | |
timed out waiting for the condition | |
occurred | |
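Note: after registering a webhook configuration, the e2e framework keeps sending canary admission requests until the webhook actually intercepts one; the five "Waiting for webhook configuration to be ready..." probes above, followed by this timeout, mean that never happened. Since the webhook pod itself went Running/Ready (see the events below), a common cause of this pattern, offered here only as a hypothesis, is the kube-apiserver being unable to reach the webhook Service across the pod network (e.g. blocked vxlan traffic between masters and workers on a flannel cluster), which would also explain the six webhook conformance failures carried in the "failures" list throughout this run. For reference, a sketch of the kind of configuration under test, using the admissionregistration.k8s.io/v1 Go types; the service namespace and name are taken from this log, while the webhook name, path, and CA bundle are placeholders:

package main

import (
	"encoding/json"
	"fmt"

	admissionregistrationv1 "k8s.io/api/admissionregistration/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	failurePolicy := admissionregistrationv1.Fail
	sideEffects := admissionregistrationv1.SideEffectClassNone
	path := "/pods/attach" // placeholder path served by the sample webhook
	port := int32(443)
	timeout := int32(10)
	cfg := &admissionregistrationv1.ValidatingWebhookConfiguration{
		TypeMeta:   metav1.TypeMeta{APIVersion: "admissionregistration.k8s.io/v1", Kind: "ValidatingWebhookConfiguration"},
		ObjectMeta: metav1.ObjectMeta{Name: "deny-attaching-pod.example.com"},
		Webhooks: []admissionregistrationv1.ValidatingWebhook{{
			Name: "deny-attaching-pod.example.com",
			// CONNECT on pods/attach is the operation "kubectl attach" performs.
			Rules: []admissionregistrationv1.RuleWithOperations{{
				Operations: []admissionregistrationv1.OperationType{admissionregistrationv1.Connect},
				Rule: admissionregistrationv1.Rule{
					APIGroups:   []string{""},
					APIVersions: []string{"v1"},
					Resources:   []string{"pods/attach"},
				},
			}},
			ClientConfig: admissionregistrationv1.WebhookClientConfig{
				Service: &admissionregistrationv1.ServiceReference{
					Namespace: "webhook-7545",
					Name:      "e2e-test-webhook",
					Path:      &path,
					Port:      &port,
				},
				CABundle: []byte("placeholder-ca-bundle"),
			},
			SideEffects:             &sideEffects,
			FailurePolicy:           &failurePolicy,
			TimeoutSeconds:          &timeout,
			AdmissionReviewVersions: []string{"v1", "v1beta1"},
		}},
	}
	out, _ := json.MarshalIndent(cfg, "", "  ")
	fmt.Println(string(out))
}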
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 | |
STEP: Collecting events from namespace "webhook-7545". | |
STEP: Found 6 events. | |
Mar 6 03:21:22.661: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for sample-webhook-deployment-5f65f8c764-p4tgc: {default-scheduler } Scheduled: Successfully assigned webhook-7545/sample-webhook-deployment-5f65f8c764-p4tgc to worker02 | |
Mar 6 03:21:22.661: INFO: At 2020-03-06 03:20:27 +0000 UTC - event for sample-webhook-deployment: {deployment-controller } ScalingReplicaSet: Scaled up replica set sample-webhook-deployment-5f65f8c764 to 1 | |
Mar 6 03:21:22.661: INFO: At 2020-03-06 03:20:27 +0000 UTC - event for sample-webhook-deployment-5f65f8c764: {replicaset-controller } SuccessfulCreate: Created pod: sample-webhook-deployment-5f65f8c764-p4tgc | |
Mar 6 03:21:22.661: INFO: At 2020-03-06 03:20:27 +0000 UTC - event for sample-webhook-deployment-5f65f8c764-p4tgc: {kubelet worker02} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine | |
Mar 6 03:21:22.661: INFO: At 2020-03-06 03:20:28 +0000 UTC - event for sample-webhook-deployment-5f65f8c764-p4tgc: {kubelet worker02} Created: Created container sample-webhook | |
Mar 6 03:21:22.661: INFO: At 2020-03-06 03:20:28 +0000 UTC - event for sample-webhook-deployment-5f65f8c764-p4tgc: {kubelet worker02} Started: Started container sample-webhook | |
Mar 6 03:21:22.664: INFO: POD NODE PHASE GRACE CONDITIONS | |
Mar 6 03:21:22.664: INFO: sample-webhook-deployment-5f65f8c764-p4tgc worker02 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-06 03:20:27 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-06 03:20:29 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-06 03:20:29 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-06 03:20:27 +0000 UTC }] | |
Mar 6 03:21:22.664: INFO: | |
Mar 6 03:21:22.666: INFO: | |
Logging node info for node master01 | |
Mar 6 03:21:22.669: INFO: Node Info: &Node{ObjectMeta:{master01 /api/v1/nodes/master01 aeae8a5b-4e17-4702-bb02-bcfde6cdb12a 14604 0 2020-03-06 02:29:18 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master01 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"76:15:82:0d:8b:ab"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:192.168.1.247 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{42140479488 0} {<nil>} 41152812Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3823214592 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{37926431477 0} {<nil>} 37926431477 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3718356992 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-03-06 03:19:00 +0000 UTC,LastTransitionTime:2020-03-06 02:29:18 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-03-06 03:19:00 +0000 UTC,LastTransitionTime:2020-03-06 02:29:18 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-03-06 03:19:00 +0000 UTC,LastTransitionTime:2020-03-06 02:29:18 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-03-06 03:19:00 +0000 UTC,LastTransitionTime:2020-03-06 02:30:33 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:192.168.1.247,},NodeAddress{Type:Hostname,Address:master01,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:20200220105402131453637367482142,SystemUUID:195205FE-EE72-4794-8EAA-AC554EFDEC9B,BootID:6a3bf627-7476-4f52-84fa-f3eab6d26427,KernelVersion:3.10.0-1062.12.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.5,KubeletVersion:v1.17.3,KubeProxyVersion:v1.17.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[192.168.1.252/library/k8s-keepalived@sha256:3db0032ef2feef675710595681cf9463470af179cd324c6773e831b6649ef785 192.168.1.252/library/k8s-keepalived:1.3.5],SizeBytes:356553439,},ContainerImage{Names:[sonobuoy/systemd-logs@sha256:fadad24a66ddd544987c38811108a73d1a306dd3b5e3f090b207786f2825ffde sonobuoy/systemd-logs:v0.3],SizeBytes:297365055,},ContainerImage{Names:[192.168.1.252/library/etcd-amd64@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216 
192.168.1.252/library/etcd-amd64:3.4.3-0],SizeBytes:288426917,},ContainerImage{Names:[192.168.1.252/library/node@sha256:3226b047a7034918a05c986347c5fb4d2cce6d0844f325851bfba586271ee617 192.168.1.252/library/node:v3.12.0],SizeBytes:257501722,},ContainerImage{Names:[192.168.1.252/library/cni@sha256:dc3bc525f1d3b794db1f2a7ceb7d8b84699d13e1431fbc117063f7e2075ff4b5 192.168.1.252/library/cni:v3.12.0],SizeBytes:206678344,},ContainerImage{Names:[192.168.1.252/library/kube-apiserver@sha256:4ee4113bcce32ae436b15364b09a2439dee15f6a19bbcf43a470d3dc879b0c62 192.168.1.252/library/kube-apiserver:v1.17.3],SizeBytes:170986003,},ContainerImage{Names:[192.168.1.252/library/envoy@sha256:b36ee021fc4d285de7861dbaee01e7437ce1d63814ead6ae3e4dfcad4a951b2e 192.168.1.252/library/envoy:v1.12.2],SizeBytes:170487454,},ContainerImage{Names:[192.168.1.252/library/kube-controller-manager@sha256:8f03391c0d22e3da8d22725178efe3c4338e1920504f7f4eb4a1c7f5f40c4c6e 192.168.1.252/library/kube-controller-manager:v1.17.3],SizeBytes:160918035,},ContainerImage{Names:[192.168.1.252/library/dashboard@sha256:4e0d39dae7e089b77fe2bbcef648f89905716db9c1f0884950bfd42d9f446c29 192.168.1.252/library/dashboard:v2.0.0-rc5],SizeBytes:126359420,},ContainerImage{Names:[192.168.1.252/library/kube-proxy@sha256:8dfd672298f8fbcd37731e0f2e45be87199cee6ee1f482ffec766693dcec0ec4 192.168.1.252/library/kube-proxy:v1.17.3],SizeBytes:115964919,},ContainerImage{Names:[192.168.1.252/library/pod2daemon-flexvol@sha256:2bf967507ad1adb749f3484b5d39e7d7b8700c4a0f836e8093dae5c57a585ccf 192.168.1.252/library/pod2daemon-flexvol:v3.12.0],SizeBytes:111122324,},ContainerImage{Names:[192.168.1.252/library/kube-scheduler@sha256:eea535b58da5b2fb66b32c61c6913d014cf85061a39527ad9bca2fa84b53dc1c 192.168.1.252/library/kube-scheduler:v1.17.3],SizeBytes:94435859,},ContainerImage{Names:[sonobuoy/sonobuoy@sha256:73f9cfe546ac6d0d5b94308293484c29c3b02fc341a975cfca97c80dd8728ed7 sonobuoy/sonobuoy:v0.17.2],SizeBytes:84339798,},ContainerImage{Names:[192.168.1.252/library/k8s-haproxy@sha256:f88bca67a2782e7bd88af168d5c11216c32104d12fe9240fac54a1d3196e3f9c 192.168.1.252/library/k8s-haproxy:2.0.0],SizeBytes:73550856,},ContainerImage{Names:[192.168.1.252/library/kube-controllers@sha256:edf14a5bcc663d2b0013b1830469626b7aa27206cbc7715ed83c042890ca5837 192.168.1.252/library/kube-controllers:v3.12.0],SizeBytes:56567983,},ContainerImage{Names:[192.168.1.252/library/typha@sha256:3baf9aef445a3224160748d6f560426eab798d6c65620020b2466e114bf6805f 192.168.1.252/library/typha:v3.12.0],SizeBytes:56034822,},ContainerImage{Names:[192.168.1.252/library/flannel@sha256:bd76b84c74ad70368a2341c2402841b75950df881388e43fc2aca000c546653a 192.168.1.252/library/flannel:v0.11.0-amd64],SizeBytes:52567296,},ContainerImage{Names:[192.168.1.252/library/ctl@sha256:128e4c95cf92a482496d591c43cad2a6a21fab1f0e8a8f13e8503f1324106dc8 192.168.1.252/library/ctl:v3.12.0],SizeBytes:47895826,},ContainerImage{Names:[192.168.1.252/library/coredns@sha256:608ac7ccba5ce41c6941fca13bc67059c1eef927fd968b554b790e21cc92543c 192.168.1.252/library/coredns:1.6.5],SizeBytes:41578211,},ContainerImage{Names:[192.168.1.252/library/metrics-scraper@sha256:e24a74b3b1cdc84d6285d507a12eb06907fd8c457b3e8ae9baa9418eca43efc4 192.168.1.252/library/metrics-scraper:v1.0.3],SizeBytes:40105664,},ContainerImage{Names:[192.168.1.252/library/metrics-server-amd64@sha256:c9c4e95068b51d6b33a9dccc61875df07dc650abbf4ac1a19d58b4628f89288b 
192.168.1.252/library/metrics-server-amd64:v0.3.6],SizeBytes:39944451,},ContainerImage{Names:[192.168.1.252/library/contour@sha256:3e10c69dfeaa830b84a50e6b47ce90e0f5a1aa84daf77f7662313077fa9579cf 192.168.1.252/library/contour:v1.1.0],SizeBytes:35721216,},ContainerImage{Names:[192.168.1.252/library/kuard-amd64@sha256:bd17153e9a3319f401acc7a27759243f37d422c06cbbf01cb3e1f54bbbfe14f4 192.168.1.252/library/kuard-amd64:1],SizeBytes:19745911,},ContainerImage{Names:[192.168.1.252/library/pause@sha256:fcaff905397ba63fd376d0c3019f1f1cb6e7506131389edbcb3d22719f1ae54d 192.168.1.252/library/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} | |
Mar 6 03:21:22.669: INFO: | |
Logging kubelet events for node master01 | |
Mar 6 03:21:22.673: INFO: | |
Logging pods the kubelet thinks is on node master01 | |
Mar 6 03:21:22.683: INFO: kube-flannel-ds-amd64-6mbnb started at 2020-03-06 02:30:22 +0000 UTC (1+1 container statuses recorded) | |
Mar 6 03:21:22.683: INFO: Init container install-cni ready: true, restart count 0 | |
Mar 6 03:21:22.683: INFO: Container kube-flannel ready: true, restart count 0 | |
Mar 6 03:21:22.683: INFO: kube-proxy-4j8ft started at 2020-03-06 02:30:22 +0000 UTC (0+1 container statuses recorded) | |
Mar 6 03:21:22.683: INFO: Container kube-proxy ready: true, restart count 0 | |
Mar 6 03:21:22.683: INFO: kube-apiserver-master01 started at 2020-03-06 02:30:22 +0000 UTC (0+1 container statuses recorded) | |
Mar 6 03:21:22.683: INFO: Container kube-apiserver ready: true, restart count 0 | |
Mar 6 03:21:22.683: INFO: kube-controller-manager-master01 started at 2020-03-06 02:30:22 +0000 UTC (0+1 container statuses recorded) | |
Mar 6 03:21:22.683: INFO: Container kube-controller-manager ready: true, restart count 1 | |
Mar 6 03:21:22.683: INFO: kube-scheduler-master01 started at 2020-03-06 02:30:22 +0000 UTC (0+1 container statuses recorded) | |
Mar 6 03:21:22.683: INFO: Container kube-scheduler ready: true, restart count 1 | |
Mar 6 03:21:22.683: INFO: sonobuoy-systemd-logs-daemon-set-2e5dbce20e154397-drhpn started at 2020-03-06 02:38:12 +0000 UTC (0+2 container statuses recorded) | |
Mar 6 03:21:22.683: INFO: Container sonobuoy-worker ready: true, restart count 0 | |
Mar 6 03:21:22.683: INFO: Container systemd-logs ready: true, restart count 0 | |
W0306 03:21:22.686205 19 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. | |
Mar 6 03:21:22.704: INFO: | |
Latency metrics for node master01 | |
Mar 6 03:21:22.704: INFO: | |
Logging node info for node master02 | |
Mar 6 03:21:22.706: INFO: Node Info: &Node{ObjectMeta:{master02 /api/v1/nodes/master02 6a0ecb6f-ef31-4754-858b-3eba76999224 14587 0 2020-03-06 02:29:17 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master02 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"82:c1:38:99:3b:39"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:192.168.1.248 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{42140479488 0} {<nil>} 41152812Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3823214592 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{37926431477 0} {<nil>} 37926431477 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3718356992 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-03-06 03:18:57 +0000 UTC,LastTransitionTime:2020-03-06 02:29:16 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-03-06 03:18:57 +0000 UTC,LastTransitionTime:2020-03-06 02:29:16 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-03-06 03:18:57 +0000 UTC,LastTransitionTime:2020-03-06 02:29:16 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-03-06 03:18:57 +0000 UTC,LastTransitionTime:2020-03-06 02:30:10 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:192.168.1.248,},NodeAddress{Type:Hostname,Address:master02,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:20200220105402131453637367482142,SystemUUID:8B2C4639-6D22-4D0D-A03C-F6D7E328F9D5,BootID:efd7329f-ae31-4806-ba13-7fdd5fad57df,KernelVersion:3.10.0-1062.12.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.5,KubeletVersion:v1.17.3,KubeProxyVersion:v1.17.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[sonobuoy/systemd-logs@sha256:fadad24a66ddd544987c38811108a73d1a306dd3b5e3f090b207786f2825ffde sonobuoy/systemd-logs:v0.3],SizeBytes:297365055,},ContainerImage{Names:[192.168.1.252/library/etcd-amd64@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216 192.168.1.252/library/etcd-amd64:3.4.3-0],SizeBytes:288426917,},ContainerImage{Names:[192.168.1.252/library/kube-apiserver@sha256:4ee4113bcce32ae436b15364b09a2439dee15f6a19bbcf43a470d3dc879b0c62 
192.168.1.252/library/kube-apiserver:v1.17.3],SizeBytes:170986003,},ContainerImage{Names:[192.168.1.252/library/kube-controller-manager@sha256:8f03391c0d22e3da8d22725178efe3c4338e1920504f7f4eb4a1c7f5f40c4c6e 192.168.1.252/library/kube-controller-manager:v1.17.3],SizeBytes:160918035,},ContainerImage{Names:[192.168.1.252/library/kube-proxy@sha256:8dfd672298f8fbcd37731e0f2e45be87199cee6ee1f482ffec766693dcec0ec4 192.168.1.252/library/kube-proxy:v1.17.3],SizeBytes:115964919,},ContainerImage{Names:[192.168.1.252/library/kube-scheduler@sha256:eea535b58da5b2fb66b32c61c6913d014cf85061a39527ad9bca2fa84b53dc1c 192.168.1.252/library/kube-scheduler:v1.17.3],SizeBytes:94435859,},ContainerImage{Names:[sonobuoy/sonobuoy@sha256:73f9cfe546ac6d0d5b94308293484c29c3b02fc341a975cfca97c80dd8728ed7 sonobuoy/sonobuoy:v0.17.2],SizeBytes:84339798,},ContainerImage{Names:[192.168.1.252/library/k8s-haproxy@sha256:f88bca67a2782e7bd88af168d5c11216c32104d12fe9240fac54a1d3196e3f9c 192.168.1.252/library/k8s-haproxy:2.0.0],SizeBytes:73550856,},ContainerImage{Names:[192.168.1.252/library/flannel@sha256:bd76b84c74ad70368a2341c2402841b75950df881388e43fc2aca000c546653a 192.168.1.252/library/flannel:v0.11.0-amd64],SizeBytes:52567296,},ContainerImage{Names:[192.168.1.252/library/coredns@sha256:608ac7ccba5ce41c6941fca13bc67059c1eef927fd968b554b790e21cc92543c 192.168.1.252/library/coredns:1.6.5],SizeBytes:41578211,},ContainerImage{Names:[192.168.1.252/library/pause@sha256:fcaff905397ba63fd376d0c3019f1f1cb6e7506131389edbcb3d22719f1ae54d 192.168.1.252/library/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} | |
Mar 6 03:21:22.706: INFO: | |
Logging kubelet events for node master02 | |
Mar 6 03:21:22.710: INFO: | |
Logging pods the kubelet thinks is on node master02 | |
Mar 6 03:21:22.724: INFO: kube-apiserver-master02 started at 2020-03-06 02:29:50 +0000 UTC (0+1 container statuses recorded) | |
Mar 6 03:21:22.724: INFO: Container kube-apiserver ready: true, restart count 0 | |
Mar 6 03:21:22.724: INFO: kube-controller-manager-master02 started at 2020-03-06 02:29:50 +0000 UTC (0+1 container statuses recorded) | |
Mar 6 03:21:22.724: INFO: Container kube-controller-manager ready: true, restart count 1 | |
Mar 6 03:21:22.724: INFO: kube-scheduler-master02 started at 2020-03-06 02:29:23 +0000 UTC (0+1 container statuses recorded) | |
Mar 6 03:21:22.724: INFO: Container kube-scheduler ready: true, restart count 1 | |
Mar 6 03:21:22.724: INFO: kube-proxy-scdss started at 2020-03-06 02:30:00 +0000 UTC (0+1 container statuses recorded) | |
Mar 6 03:21:22.724: INFO: Container kube-proxy ready: true, restart count 0 | |
Mar 6 03:21:22.724: INFO: kube-flannel-ds-amd64-vfl78 started at 2020-03-06 02:30:00 +0000 UTC (1+1 container statuses recorded) | |
Mar 6 03:21:22.724: INFO: Init container install-cni ready: true, restart count 0 | |
Mar 6 03:21:22.724: INFO: Container kube-flannel ready: true, restart count 0 | |
Mar 6 03:21:22.724: INFO: coredns-7795996659-phdkc started at 2020-03-06 02:30:13 +0000 UTC (0+1 container statuses recorded) | |
Mar 6 03:21:22.724: INFO: Container coredns ready: true, restart count 0 | |
Mar 6 03:21:22.724: INFO: sonobuoy-systemd-logs-daemon-set-2e5dbce20e154397-2zmwm started at 2020-03-06 02:38:12 +0000 UTC (0+2 container statuses recorded) | |
Mar 6 03:21:22.724: INFO: Container sonobuoy-worker ready: true, restart count 0 | |
Mar 6 03:21:22.724: INFO: Container systemd-logs ready: true, restart count 0 | |
W0306 03:21:22.727174 19 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. | |
Mar 6 03:21:22.748: INFO: | |
Latency metrics for node master02 | |
Mar 6 03:21:22.748: INFO: | |
Logging node info for node master03 | |
Mar 6 03:21:22.756: INFO: Node Info: &Node{ObjectMeta:{master03 /api/v1/nodes/master03 c508ee4c-fe9d-4c73-a857-e57fba26fa86 14588 0 2020-03-06 02:29:17 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master03 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"4a:aa:08:ea:16:90"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:192.168.1.249 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{42140479488 0} {<nil>} 41152812Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3823226880 0} {<nil>} 3733620Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{37926431477 0} {<nil>} 37926431477 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3718369280 0} {<nil>} 3631220Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-03-06 03:18:57 +0000 UTC,LastTransitionTime:2020-03-06 02:29:17 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-03-06 03:18:57 +0000 UTC,LastTransitionTime:2020-03-06 02:29:17 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-03-06 03:18:57 +0000 UTC,LastTransitionTime:2020-03-06 02:29:17 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-03-06 03:18:57 +0000 UTC,LastTransitionTime:2020-03-06 02:30:10 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:192.168.1.249,},NodeAddress{Type:Hostname,Address:master03,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:20200220105402131453637367482142,SystemUUID:0C8F0A67-EB7E-42DE-9418-0973FE735A08,BootID:05b1fa23-e6be-4032-bc93-8800264dff91,KernelVersion:3.10.0-1062.12.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.5,KubeletVersion:v1.17.3,KubeProxyVersion:v1.17.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[sonobuoy/systemd-logs@sha256:fadad24a66ddd544987c38811108a73d1a306dd3b5e3f090b207786f2825ffde sonobuoy/systemd-logs:v0.3],SizeBytes:297365055,},ContainerImage{Names:[192.168.1.252/library/etcd-amd64@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216 192.168.1.252/library/etcd-amd64:3.4.3-0],SizeBytes:288426917,},ContainerImage{Names:[192.168.1.252/library/kube-apiserver@sha256:4ee4113bcce32ae436b15364b09a2439dee15f6a19bbcf43a470d3dc879b0c62 
192.168.1.252/library/kube-apiserver:v1.17.3],SizeBytes:170986003,},ContainerImage{Names:[192.168.1.252/library/kube-controller-manager@sha256:8f03391c0d22e3da8d22725178efe3c4338e1920504f7f4eb4a1c7f5f40c4c6e 192.168.1.252/library/kube-controller-manager:v1.17.3],SizeBytes:160918035,},ContainerImage{Names:[192.168.1.252/library/dashboard@sha256:4e0d39dae7e089b77fe2bbcef648f89905716db9c1f0884950bfd42d9f446c29 192.168.1.252/library/dashboard:v2.0.0-rc5],SizeBytes:126359420,},ContainerImage{Names:[192.168.1.252/library/kube-proxy@sha256:8dfd672298f8fbcd37731e0f2e45be87199cee6ee1f482ffec766693dcec0ec4 192.168.1.252/library/kube-proxy:v1.17.3],SizeBytes:115964919,},ContainerImage{Names:[192.168.1.252/library/kube-scheduler@sha256:eea535b58da5b2fb66b32c61c6913d014cf85061a39527ad9bca2fa84b53dc1c 192.168.1.252/library/kube-scheduler:v1.17.3],SizeBytes:94435859,},ContainerImage{Names:[sonobuoy/sonobuoy@sha256:73f9cfe546ac6d0d5b94308293484c29c3b02fc341a975cfca97c80dd8728ed7 sonobuoy/sonobuoy:v0.17.2],SizeBytes:84339798,},ContainerImage{Names:[192.168.1.252/library/k8s-haproxy@sha256:f88bca67a2782e7bd88af168d5c11216c32104d12fe9240fac54a1d3196e3f9c 192.168.1.252/library/k8s-haproxy:2.0.0],SizeBytes:73550856,},ContainerImage{Names:[192.168.1.252/library/flannel@sha256:bd76b84c74ad70368a2341c2402841b75950df881388e43fc2aca000c546653a 192.168.1.252/library/flannel:v0.11.0-amd64],SizeBytes:52567296,},ContainerImage{Names:[192.168.1.252/library/coredns@sha256:608ac7ccba5ce41c6941fca13bc67059c1eef927fd968b554b790e21cc92543c 192.168.1.252/library/coredns:1.6.5],SizeBytes:41578211,},ContainerImage{Names:[192.168.1.252/library/metrics-scraper@sha256:e24a74b3b1cdc84d6285d507a12eb06907fd8c457b3e8ae9baa9418eca43efc4 192.168.1.252/library/metrics-scraper:v1.0.3],SizeBytes:40105664,},ContainerImage{Names:[192.168.1.252/library/pause@sha256:fcaff905397ba63fd376d0c3019f1f1cb6e7506131389edbcb3d22719f1ae54d 192.168.1.252/library/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} | |
Mar 6 03:21:22.757: INFO: | |
Logging kubelet events for node master03 | |
Mar 6 03:21:22.761: INFO: | |
Logging pods the kubelet thinks is on node master03 | |
Mar 6 03:21:22.772: INFO: kube-controller-manager-master03 started at 2020-03-06 02:29:50 +0000 UTC (0+1 container statuses recorded) | |
Mar 6 03:21:22.772: INFO: Container kube-controller-manager ready: true, restart count 1 | |
Mar 6 03:21:22.772: INFO: kube-flannel-ds-amd64-hs69k started at 2020-03-06 02:30:00 +0000 UTC (1+1 container statuses recorded) | |
Mar 6 03:21:22.772: INFO: Init container install-cni ready: true, restart count 0 | |
Mar 6 03:21:22.772: INFO: Container kube-flannel ready: true, restart count 0 | |
Mar 6 03:21:22.772: INFO: dashboard-metrics-scraper-56568cb9d7-d57kl started at 2020-03-06 02:30:10 +0000 UTC (0+1 container statuses recorded) | |
Mar 6 03:21:22.772: INFO: Container dashboard-metrics-scraper ready: true, restart count 0 | |
Mar 6 03:21:22.772: INFO: kube-apiserver-master03 started at 2020-03-06 02:29:24 +0000 UTC (0+1 container statuses recorded) | |
Mar 6 03:21:22.772: INFO: Container kube-apiserver ready: true, restart count 0 | |
Mar 6 03:21:22.772: INFO: kube-scheduler-master03 started at 2020-03-06 02:29:38 +0000 UTC (0+1 container statuses recorded) | |
Mar 6 03:21:22.772: INFO: Container kube-scheduler ready: true, restart count 1 | |
Mar 6 03:21:22.772: INFO: kube-proxy-stbnn started at 2020-03-06 02:30:00 +0000 UTC (0+1 container statuses recorded) | |
Mar 6 03:21:22.772: INFO: Container kube-proxy ready: true, restart count 0 | |
Mar 6 03:21:22.772: INFO: kubernetes-dashboard-6647798d59-j2ms4 started at 2020-03-06 02:30:10 +0000 UTC (0+1 container statuses recorded) | |
Mar 6 03:21:22.772: INFO: Container kubernetes-dashboard ready: true, restart count 0 | |
Mar 6 03:21:22.772: INFO: coredns-7795996659-cmq4d started at 2020-03-06 02:30:13 +0000 UTC (0+1 container statuses recorded) | |
Mar 6 03:21:22.772: INFO: Container coredns ready: true, restart count 0 | |
Mar 6 03:21:22.772: INFO: sonobuoy-systemd-logs-daemon-set-2e5dbce20e154397-w5psq started at 2020-03-06 02:38:12 +0000 UTC (0+2 container statuses recorded) | |
Mar 6 03:21:22.772: INFO: Container sonobuoy-worker ready: true, restart count 0 | |
Mar 6 03:21:22.772: INFO: Container systemd-logs ready: true, restart count 0 | |
W0306 03:21:22.774437 19 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. | |
Mar 6 03:21:22.801: INFO: | |
Latency metrics for node master03 | |
Mar 6 03:21:22.801: INFO: | |
Logging node info for node worker01 | |
Mar 6 03:21:22.803: INFO: Node Info: &Node{ObjectMeta:{worker01 /api/v1/nodes/worker01 cf4203bb-1bfa-4b35-991f-935275b6bc46 14805 0 2020-03-06 02:30:30 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:worker01 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"5a:49:f5:5b:74:b3"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:192.168.1.250 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{42140479488 0} {<nil>} 41152812Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3823214592 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{37926431477 0} {<nil>} 37926431477 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3718356992 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-03-06 03:19:53 +0000 UTC,LastTransitionTime:2020-03-06 02:30:30 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-03-06 03:19:53 +0000 UTC,LastTransitionTime:2020-03-06 02:30:30 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-03-06 03:19:53 +0000 UTC,LastTransitionTime:2020-03-06 02:30:30 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-03-06 03:19:53 +0000 UTC,LastTransitionTime:2020-03-06 02:30:45 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:192.168.1.250,},NodeAddress{Type:Hostname,Address:worker01,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:20200220105402131453637367482142,SystemUUID:5CA364AA-0FF8-4B57-BA86-F28699575F0D,BootID:c85ad0c4-ebcf-4d01-97f0-a36c1cfc50be,KernelVersion:3.10.0-1062.12.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.5,KubeletVersion:v1.17.3,KubeProxyVersion:v1.17.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[sonobuoy/systemd-logs@sha256:fadad24a66ddd544987c38811108a73d1a306dd3b5e3f090b207786f2825ffde sonobuoy/systemd-logs:v0.3],SizeBytes:297365055,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[192.168.1.252/library/kube-proxy@sha256:8dfd672298f8fbcd37731e0f2e45be87199cee6ee1f482ffec766693dcec0ec4 
192.168.1.252/library/kube-proxy:v1.17.3],SizeBytes:115964919,},ContainerImage{Names:[sonobuoy/sonobuoy@sha256:73f9cfe546ac6d0d5b94308293484c29c3b02fc341a975cfca97c80dd8728ed7 sonobuoy/sonobuoy:v0.17.2],SizeBytes:84339798,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5 gcr.io/kubernetes-e2e-test-images/agnhost:2.8],SizeBytes:52800335,},ContainerImage{Names:[192.168.1.252/library/flannel@sha256:bd76b84c74ad70368a2341c2402841b75950df881388e43fc2aca000c546653a 192.168.1.252/library/flannel:v0.11.0-amd64],SizeBytes:52567296,},ContainerImage{Names:[192.168.1.252/library/metrics-server-amd64@sha256:c9c4e95068b51d6b33a9dccc61875df07dc650abbf4ac1a19d58b4628f89288b 192.168.1.252/library/metrics-server-amd64:v0.3.6],SizeBytes:39944451,},ContainerImage{Names:[192.168.1.252/library/kuard-amd64@sha256:bd17153e9a3319f401acc7a27759243f37d422c06cbbf01cb3e1f54bbbfe14f4 192.168.1.252/library/kuard-amd64:1],SizeBytes:19745911,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[192.168.1.252/library/pause@sha256:fcaff905397ba63fd376d0c3019f1f1cb6e7506131389edbcb3d22719f1ae54d k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea 192.168.1.252/library/pause:3.1 k8s.gcr.io/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} | |
Mar 6 03:21:22.803: INFO: | |
Logging kubelet events for node worker01 | |
Mar 6 03:21:22.807: INFO: | |
Logging pods the kubelet thinks are on node worker01 | |
Mar 6 03:21:22.819: INFO: contour-54748c65f5-jl5wz started at 2020-03-06 02:30:46 +0000 UTC (0+1 container statuses recorded) | |
Mar 6 03:21:22.819: INFO: Container contour ready: false, restart count 0 | |
Mar 6 03:21:22.819: INFO: metrics-server-78799bf646-xrsnn started at 2020-03-06 02:30:51 +0000 UTC (0+1 container statuses recorded) | |
Mar 6 03:21:22.819: INFO: Container metrics-server ready: true, restart count 0 | |
Mar 6 03:21:22.819: INFO: kube-proxy-kcb8f started at 2020-03-06 02:30:30 +0000 UTC (0+1 container statuses recorded) | |
Mar 6 03:21:22.819: INFO: Container kube-proxy ready: true, restart count 0 | |
Mar 6 03:21:22.819: INFO: contour-certgen-82k46 started at 2020-03-06 02:30:46 +0000 UTC (0+1 container statuses recorded) | |
Mar 6 03:21:22.819: INFO: Container contour ready: false, restart count 0 | |
Mar 6 03:21:22.819: INFO: contour-54748c65f5-gk5sz started at 2020-03-06 02:30:51 +0000 UTC (0+1 container statuses recorded) | |
Mar 6 03:21:22.819: INFO: Container contour ready: false, restart count 0 | |
Mar 6 03:21:22.819: INFO: sonobuoy-systemd-logs-daemon-set-2e5dbce20e154397-2bz8g started at 2020-03-06 02:38:12 +0000 UTC (0+2 container statuses recorded) | |
Mar 6 03:21:22.819: INFO: Container sonobuoy-worker ready: true, restart count 0 | |
Mar 6 03:21:22.819: INFO: Container systemd-logs ready: true, restart count 0 | |
Mar 6 03:21:22.819: INFO: kube-flannel-ds-amd64-xxhz9 started at 2020-03-06 02:30:30 +0000 UTC (1+1 container statuses recorded) | |
Mar 6 03:21:22.819: INFO: Init container install-cni ready: true, restart count 0 | |
Mar 6 03:21:22.819: INFO: Container kube-flannel ready: true, restart count 1 | |
Mar 6 03:21:22.819: INFO: kuard-678c676f5d-vsn86 started at 2020-03-06 02:30:51 +0000 UTC (0+1 container statuses recorded) | |
Mar 6 03:21:22.819: INFO: Container kuard ready: true, restart count 0 | |
Mar 6 03:21:22.819: INFO: envoy-lvmcb started at 2020-03-06 02:30:45 +0000 UTC (1+1 container statuses recorded) | |
Mar 6 03:21:22.819: INFO: Init container envoy-initconfig ready: false, restart count 0 | |
Mar 6 03:21:22.819: INFO: Container envoy ready: false, restart count 0 | |
Mar 6 03:21:22.819: INFO: kuard-678c676f5d-m29b6 started at 2020-03-06 02:30:49 +0000 UTC (0+1 container statuses recorded) | |
Mar 6 03:21:22.819: INFO: Container kuard ready: true, restart count 0 | |
Mar 6 03:21:22.819: INFO: kuard-678c676f5d-tzsnn started at 2020-03-06 02:30:51 +0000 UTC (0+1 container statuses recorded) | |
Mar 6 03:21:22.819: INFO: Container kuard ready: true, restart count 0 | |
W0306 03:21:22.821797 19 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. | |
Mar 6 03:21:22.838: INFO: | |
Latency metrics for node worker01 | |
Mar 6 03:21:22.838: INFO: | |
Logging node info for node worker02 | |
Mar 6 03:21:22.840: INFO: Node Info: &Node{ObjectMeta:{worker02 /api/v1/nodes/worker02 f0994ba1-7e4e-4cc8-b3c8-25d34b25d9ce 14565 0 2020-03-06 02:30:30 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:worker02 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"1a:75:0a:e8:cc:76"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:192.168.1.251 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{42140479488 0} {<nil>} 41152812Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3823214592 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{37926431477 0} {<nil>} 37926431477 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3718356992 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-03-06 03:18:52 +0000 UTC,LastTransitionTime:2020-03-06 02:30:30 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-03-06 03:18:52 +0000 UTC,LastTransitionTime:2020-03-06 02:30:30 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-03-06 03:18:52 +0000 UTC,LastTransitionTime:2020-03-06 02:30:30 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-03-06 03:18:52 +0000 UTC,LastTransitionTime:2020-03-06 02:30:55 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:192.168.1.251,},NodeAddress{Type:Hostname,Address:worker02,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:20200220105402131453637367482142,SystemUUID:EDBF7E33-228B-4233-93CF-7850B5A311E4,BootID:bd6a4f0f-5ddb-4585-83df-253b9292b617,KernelVersion:3.10.0-1062.12.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.5,KubeletVersion:v1.17.3,KubeProxyVersion:v1.17.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[gcr.io/google-containers/conformance@sha256:502434491cbc3fac5d9a606a879e554cf881b2ba5b688bed25f2c33d3ff1c777 gcr.io/google-containers/conformance:v1.17.3],SizeBytes:575831882,},ContainerImage{Names:[sonobuoy/systemd-logs@sha256:fadad24a66ddd544987c38811108a73d1a306dd3b5e3f090b207786f2825ffde sonobuoy/systemd-logs:v0.3],SizeBytes:297365055,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb 
gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:195659796,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[192.168.1.252/library/kube-proxy@sha256:8dfd672298f8fbcd37731e0f2e45be87199cee6ee1f482ffec766693dcec0ec4 192.168.1.252/library/kube-proxy:v1.17.3],SizeBytes:115964919,},ContainerImage{Names:[sonobuoy/sonobuoy@sha256:73f9cfe546ac6d0d5b94308293484c29c3b02fc341a975cfca97c80dd8728ed7 sonobuoy/sonobuoy:v0.17.2],SizeBytes:84339798,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5 gcr.io/kubernetes-e2e-test-images/agnhost:2.8],SizeBytes:52800335,},ContainerImage{Names:[192.168.1.252/library/flannel@sha256:bd76b84c74ad70368a2341c2402841b75950df881388e43fc2aca000c546653a 192.168.1.252/library/flannel:v0.11.0-amd64],SizeBytes:52567296,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/dnsutils@sha256:2abeee84efb79c14d731966e034af33bf324d3b26ca28497555511ff094b3ddd gcr.io/kubernetes-e2e-test-images/dnsutils:1.1],SizeBytes:9349974,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0],SizeBytes:4732240,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0],SizeBytes:1450451,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[192.168.1.252/library/pause@sha256:fcaff905397ba63fd376d0c3019f1f1cb6e7506131389edbcb3d22719f1ae54d k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea 192.168.1.252/library/pause:3.1 k8s.gcr.io/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} | |
Mar 6 03:21:22.841: INFO: | |
Logging kubelet events for node worker02 | |
Mar 6 03:21:22.844: INFO: | |
Logging pods the kubelet thinks are on node worker02 | |
Mar 6 03:21:22.848: INFO: envoy-wgz76 started at 2020-03-06 02:30:55 +0000 UTC (1+1 container statuses recorded) | |
Mar 6 03:21:22.848: INFO: Init container envoy-initconfig ready: false, restart count 0 | |
Mar 6 03:21:22.848: INFO: Container envoy ready: false, restart count 0 | |
Mar 6 03:21:22.848: INFO: sonobuoy-e2e-job-67137ff64ac145d3 started at 2020-03-06 02:38:12 +0000 UTC (0+2 container statuses recorded) | |
Mar 6 03:21:22.848: INFO: Container e2e ready: true, restart count 0 | |
Mar 6 03:21:22.848: INFO: Container sonobuoy-worker ready: true, restart count 0 | |
Mar 6 03:21:22.848: INFO: sonobuoy-systemd-logs-daemon-set-2e5dbce20e154397-bpjtd started at 2020-03-06 02:38:12 +0000 UTC (0+2 container statuses recorded) | |
Mar 6 03:21:22.848: INFO: Container sonobuoy-worker ready: true, restart count 0 | |
Mar 6 03:21:22.848: INFO: Container systemd-logs ready: true, restart count 0 | |
Mar 6 03:21:22.848: INFO: sample-webhook-deployment-5f65f8c764-p4tgc started at 2020-03-06 03:20:27 +0000 UTC (0+1 container statuses recorded) | |
Mar 6 03:21:22.848: INFO: Container sample-webhook ready: true, restart count 0 | |
Mar 6 03:21:22.848: INFO: kube-flannel-ds-amd64-ztfzf started at 2020-03-06 02:30:30 +0000 UTC (1+1 container statuses recorded) | |
Mar 6 03:21:22.848: INFO: Init container install-cni ready: true, restart count 0 | |
Mar 6 03:21:22.848: INFO: Container kube-flannel ready: true, restart count 0 | |
Mar 6 03:21:22.848: INFO: sonobuoy started at 2020-03-06 02:38:02 +0000 UTC (0+1 container statuses recorded) | |
Mar 6 03:21:22.848: INFO: Container kube-sonobuoy ready: true, restart count 0 | |
Mar 6 03:21:22.848: INFO: kube-proxy-5xxdb started at 2020-03-06 02:30:30 +0000 UTC (0+1 container statuses recorded) | |
Mar 6 03:21:22.848: INFO: Container kube-proxy ready: true, restart count 1 | |
W0306 03:21:22.850683 19 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. | |
Mar 6 03:21:22.870: INFO: | |
Latency metrics for node worker02 | |
Mar 6 03:21:22.870: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
STEP: Destroying namespace "webhook-7545" for this suite. | |
STEP: Destroying namespace "webhook-7545-markers" for this suite. | |
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 | |
• Failure [56.489 seconds] | |
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 | |
should be able to deny attaching pod [Conformance] [It] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
Mar 6 03:21:22.658: waiting for webhook configuration to be ready | |
Unexpected error: | |
<*errors.errorString | 0xc0000b3950>: { | |
s: "timed out waiting for the condition", | |
} | |
timed out waiting for the condition | |
occurred | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:963 | |
------------------------------ | |
{"msg":"FAILED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":278,"completed":102,"skipped":1838,"failed":7,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]"]} | |
SSSSSSSSSSSSSSSSSSSSSSSSSSSS | |
------------------------------ | |
[sig-storage] Projected configMap | |
should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
[BeforeEach] [sig-storage] Projected configMap | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 | |
STEP: Creating a kubernetes client | |
Mar 6 03:21:22.935: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759 | |
STEP: Building a namespace api object, basename projected | |
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-7644 | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
STEP: Creating configMap with name projected-configmap-test-volume-map-03efa1d3-f3fc-480d-87f5-f76a1ca41b67 | |
STEP: Creating a pod to test consume configMaps | |
Mar 6 03:21:23.094: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-ce84443e-6d6e-439d-8fe4-87f9f54b89fa" in namespace "projected-7644" to be "success or failure" | |
Mar 6 03:21:23.096: INFO: Pod "pod-projected-configmaps-ce84443e-6d6e-439d-8fe4-87f9f54b89fa": Phase="Pending", Reason="", readiness=false. Elapsed: 1.996753ms | |
Mar 6 03:21:25.098: INFO: Pod "pod-projected-configmaps-ce84443e-6d6e-439d-8fe4-87f9f54b89fa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.004453226s | |
STEP: Saw pod success | |
Mar 6 03:21:25.098: INFO: Pod "pod-projected-configmaps-ce84443e-6d6e-439d-8fe4-87f9f54b89fa" satisfied condition "success or failure" | |
Mar 6 03:21:25.103: INFO: Trying to get logs from node worker02 pod pod-projected-configmaps-ce84443e-6d6e-439d-8fe4-87f9f54b89fa container projected-configmap-volume-test: <nil> | |
STEP: delete the pod | |
Mar 6 03:21:25.125: INFO: Waiting for pod pod-projected-configmaps-ce84443e-6d6e-439d-8fe4-87f9f54b89fa to disappear | |
Mar 6 03:21:25.127: INFO: Pod pod-projected-configmaps-ce84443e-6d6e-439d-8fe4-87f9f54b89fa no longer exists | |
[AfterEach] [sig-storage] Projected configMap | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 | |
Mar 6 03:21:25.127: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
STEP: Destroying namespace "projected-7644" for this suite. | |
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":103,"skipped":1866,"failed":7,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]"]} | |
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS | |
------------------------------ | |
[sig-node] ConfigMap | |
should fail to create ConfigMap with empty key [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
[BeforeEach] [sig-node] ConfigMap | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 | |
STEP: Creating a kubernetes client | |
Mar 6 03:21:25.136: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759 | |
STEP: Building a namespace api object, basename configmap | |
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in configmap-5556 | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[It] should fail to create ConfigMap with empty key [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
STEP: Creating configMap that has name configmap-test-emptyKey-ea657bbb-a296-444c-b56e-9d02d26887a1 | |
[AfterEach] [sig-node] ConfigMap | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 | |
Mar 6 03:21:25.265: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
STEP: Destroying namespace "configmap-5556" for this suite. | |
•{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":278,"completed":104,"skipped":1919,"failed":7,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]"]} | |
SSSSSSSSSS | |
------------------------------ | |
[sig-network] DNS | |
should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
[BeforeEach] [sig-network] DNS | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 | |
STEP: Creating a kubernetes client | |
Mar 6 03:21:25.272: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759 | |
STEP: Building a namespace api object, basename dns | |
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in dns-792 | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
STEP: Creating a test headless service | |
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-792 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-792;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-792 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-792;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-792.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-792.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-792.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-792.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-792.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-792.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-792.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-792.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-792.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-792.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-792.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-792.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-792.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 71.204.104.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.104.204.71_udp@PTR;check="$$(dig +tcp +noall +answer +search 71.204.104.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.104.204.71_tcp@PTR;sleep 1; done | |
STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-792 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-792;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-792 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-792;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-792.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-792.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-792.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-792.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-792.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-792.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-792.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-792.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-792.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-792.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-792.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-792.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-792.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 71.204.104.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.104.204.71_udp@PTR;check="$$(dig +tcp +noall +answer +search 71.204.104.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.104.204.71_tcp@PTR;sleep 1; done | |
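A note on the probe commands above: the doubled dollar signs are not shell syntax but container-args escaping. Kubernetes expands $(VAR) references in a container's command/args, and $$ collapses to a literal $ before the shell runs, so the shell ultimately sees check="$(dig ...)". A minimal sketch of one such probe container (image and result path taken from this log):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	probe := corev1.Container{
		Name:    "querier",
		Image:   "gcr.io/kubernetes-e2e-test-images/dnsutils:1.1",
		Command: []string{"sh", "-c"},
		// After kubelet arg expansion the shell receives: check="$(dig ...)" && ...
		Args: []string{`check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service`},
	}
	fmt.Printf("%+v\n", probe)
}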
STEP: creating a pod to probe DNS | |
STEP: submitting the pod to kubernetes | |
STEP: retrieving the pod | |
STEP: looking for the results for each expected name from probers | |
Mar 6 03:21:29.469: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad) | |
Mar 6 03:21:29.471: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad) | |
Mar 6 03:21:29.473: INFO: Unable to read wheezy_udp@dns-test-service.dns-792 from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad) | |
Mar 6 03:21:29.476: INFO: Unable to read wheezy_tcp@dns-test-service.dns-792 from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad) | |
Mar 6 03:21:29.478: INFO: Unable to read wheezy_udp@dns-test-service.dns-792.svc from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad) | |
Mar 6 03:21:29.480: INFO: Unable to read wheezy_tcp@dns-test-service.dns-792.svc from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad) | |
Mar 6 03:21:29.483: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-792.svc from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad) | |
Mar 6 03:21:29.485: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-792.svc from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad) | |
Mar 6 03:21:29.498: INFO: Unable to read jessie_udp@dns-test-service from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad) | |
Mar 6 03:21:29.500: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad) | |
Mar 6 03:21:29.504: INFO: Unable to read jessie_udp@dns-test-service.dns-792 from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad) | |
Mar 6 03:21:29.506: INFO: Unable to read jessie_tcp@dns-test-service.dns-792 from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad) | |
Mar 6 03:21:29.509: INFO: Unable to read jessie_udp@dns-test-service.dns-792.svc from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad) | |
Mar 6 03:21:29.511: INFO: Unable to read jessie_tcp@dns-test-service.dns-792.svc from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad) | |
Mar 6 03:21:29.518: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-792.svc from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad) | |
Mar 6 03:21:29.520: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-792.svc from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad) | |
Mar 6 03:21:29.532: INFO: Lookups using dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-792 wheezy_tcp@dns-test-service.dns-792 wheezy_udp@dns-test-service.dns-792.svc wheezy_tcp@dns-test-service.dns-792.svc wheezy_udp@_http._tcp.dns-test-service.dns-792.svc wheezy_tcp@_http._tcp.dns-test-service.dns-792.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-792 jessie_tcp@dns-test-service.dns-792 jessie_udp@dns-test-service.dns-792.svc jessie_tcp@dns-test-service.dns-792.svc jessie_udp@_http._tcp.dns-test-service.dns-792.svc jessie_tcp@_http._tcp.dns-test-service.dns-792.svc] | |
Mar 6 03:21:34.536: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad) | |
Mar 6 03:21:34.538: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad) | |
Mar 6 03:21:34.540: INFO: Unable to read wheezy_udp@dns-test-service.dns-792 from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad) | |
Mar 6 03:21:34.542: INFO: Unable to read wheezy_tcp@dns-test-service.dns-792 from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad) | |
Mar 6 03:21:34.545: INFO: Unable to read wheezy_udp@dns-test-service.dns-792.svc from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad) | |
Mar 6 03:21:34.547: INFO: Unable to read wheezy_tcp@dns-test-service.dns-792.svc from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad) | |
Mar 6 03:21:34.549: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-792.svc from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad) | |
Mar 6 03:21:34.551: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-792.svc from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad) | |
Mar 6 03:21:34.566: INFO: Unable to read jessie_udp@dns-test-service from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad) | |
Mar 6 03:21:34.568: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad) | |
Mar 6 03:21:34.570: INFO: Unable to read jessie_udp@dns-test-service.dns-792 from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad) | |
Mar 6 03:21:34.572: INFO: Unable to read jessie_tcp@dns-test-service.dns-792 from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad) | |
Mar 6 03:21:34.574: INFO: Unable to read jessie_udp@dns-test-service.dns-792.svc from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad) | |
Mar 6 03:21:34.576: INFO: Unable to read jessie_tcp@dns-test-service.dns-792.svc from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad) | |
Mar 6 03:21:34.578: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-792.svc from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad) | |
Mar 6 03:21:34.580: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-792.svc from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad) | |
Mar 6 03:21:34.592: INFO: Lookups using dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-792 wheezy_tcp@dns-test-service.dns-792 wheezy_udp@dns-test-service.dns-792.svc wheezy_tcp@dns-test-service.dns-792.svc wheezy_udp@_http._tcp.dns-test-service.dns-792.svc wheezy_tcp@_http._tcp.dns-test-service.dns-792.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-792 jessie_tcp@dns-test-service.dns-792 jessie_udp@dns-test-service.dns-792.svc jessie_tcp@dns-test-service.dns-792.svc jessie_udp@_http._tcp.dns-test-service.dns-792.svc jessie_tcp@_http._tcp.dns-test-service.dns-792.svc] | |
Mar 6 03:21:39.541: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad) | |
Mar 6 03:21:39.544: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad) | |
Mar 6 03:21:39.548: INFO: Unable to read wheezy_udp@dns-test-service.dns-792 from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad) | |
Mar 6 03:21:39.551: INFO: Unable to read wheezy_tcp@dns-test-service.dns-792 from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad) | |
Mar 6 03:21:39.556: INFO: Unable to read wheezy_udp@dns-test-service.dns-792.svc from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad) | |
Mar 6 03:21:39.563: INFO: Unable to read wheezy_tcp@dns-test-service.dns-792.svc from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad) | |
Mar 6 03:21:39.566: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-792.svc from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad) | |
Mar 6 03:21:39.568: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-792.svc from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad) | |
Mar 6 03:21:39.582: INFO: Unable to read jessie_udp@dns-test-service from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad) | |
Mar 6 03:21:39.584: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad) | |
Mar 6 03:21:39.586: INFO: Unable to read jessie_udp@dns-test-service.dns-792 from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad) | |
Mar 6 03:21:39.589: INFO: Unable to read jessie_tcp@dns-test-service.dns-792 from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad) | |
Mar 6 03:21:39.590: INFO: Unable to read jessie_udp@dns-test-service.dns-792.svc from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad) | |
Mar 6 03:21:39.594: INFO: Unable to read jessie_tcp@dns-test-service.dns-792.svc from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad) | |
Mar 6 03:21:39.596: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-792.svc from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad) | |
Mar 6 03:21:39.598: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-792.svc from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad) | |
Mar 6 03:21:39.613: INFO: Lookups using dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-792 wheezy_tcp@dns-test-service.dns-792 wheezy_udp@dns-test-service.dns-792.svc wheezy_tcp@dns-test-service.dns-792.svc wheezy_udp@_http._tcp.dns-test-service.dns-792.svc wheezy_tcp@_http._tcp.dns-test-service.dns-792.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-792 jessie_tcp@dns-test-service.dns-792 jessie_udp@dns-test-service.dns-792.svc jessie_tcp@dns-test-service.dns-792.svc jessie_udp@_http._tcp.dns-test-service.dns-792.svc jessie_tcp@_http._tcp.dns-test-service.dns-792.svc] | |
Mar 6 03:21:44.535: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad) | |
Mar 6 03:21:44.538: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad) | |
Mar 6 03:21:44.540: INFO: Unable to read wheezy_udp@dns-test-service.dns-792 from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad) | |
Mar 6 03:21:44.542: INFO: Unable to read wheezy_tcp@dns-test-service.dns-792 from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad) | |
Mar 6 03:21:44.545: INFO: Unable to read wheezy_udp@dns-test-service.dns-792.svc from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad) | |
Mar 6 03:21:44.547: INFO: Unable to read wheezy_tcp@dns-test-service.dns-792.svc from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad) | |
Mar 6 03:21:44.549: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-792.svc from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad) | |
Mar 6 03:21:44.551: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-792.svc from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad) | |
Mar 6 03:21:44.564: INFO: Unable to read jessie_udp@dns-test-service from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad) | |
Mar 6 03:21:44.566: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad) | |
Mar 6 03:21:44.568: INFO: Unable to read jessie_udp@dns-test-service.dns-792 from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad) | |
Mar 6 03:21:44.572: INFO: Unable to read jessie_tcp@dns-test-service.dns-792 from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad) | |
Mar 6 03:21:44.575: INFO: Unable to read jessie_udp@dns-test-service.dns-792.svc from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad) | |
Mar 6 03:21:44.578: INFO: Unable to read jessie_tcp@dns-test-service.dns-792.svc from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad) | |
Mar 6 03:21:44.580: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-792.svc from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad) | |
Mar 6 03:21:44.582: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-792.svc from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad) | |
Mar 6 03:21:44.596: INFO: Lookups using dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-792 wheezy_tcp@dns-test-service.dns-792 wheezy_udp@dns-test-service.dns-792.svc wheezy_tcp@dns-test-service.dns-792.svc wheezy_udp@_http._tcp.dns-test-service.dns-792.svc wheezy_tcp@_http._tcp.dns-test-service.dns-792.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-792 jessie_tcp@dns-test-service.dns-792 jessie_udp@dns-test-service.dns-792.svc jessie_tcp@dns-test-service.dns-792.svc jessie_udp@_http._tcp.dns-test-service.dns-792.svc jessie_tcp@_http._tcp.dns-test-service.dns-792.svc] | |
Mar 6 03:21:49.536: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad) | |
Mar 6 03:21:49.538: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad) | |
Mar 6 03:21:49.540: INFO: Unable to read wheezy_udp@dns-test-service.dns-792 from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad) | |
Mar 6 03:21:49.544: INFO: Unable to read wheezy_tcp@dns-test-service.dns-792 from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad) | |
Mar 6 03:21:49.547: INFO: Unable to read wheezy_udp@dns-test-service.dns-792.svc from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad) | |
Mar 6 03:21:49.549: INFO: Unable to read wheezy_tcp@dns-test-service.dns-792.svc from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad) | |
Mar 6 03:21:49.551: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-792.svc from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad) | |
Mar 6 03:21:49.556: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-792.svc from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad) | |
Mar 6 03:21:49.571: INFO: Unable to read jessie_udp@dns-test-service from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad) | |
Mar 6 03:21:49.574: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad) | |
Mar 6 03:21:49.576: INFO: Unable to read jessie_udp@dns-test-service.dns-792 from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad) | |
Mar 6 03:21:49.578: INFO: Unable to read jessie_tcp@dns-test-service.dns-792 from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad) | |
Mar 6 03:21:49.580: INFO: Unable to read jessie_udp@dns-test-service.dns-792.svc from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad) | |
Mar 6 03:21:49.582: INFO: Unable to read jessie_tcp@dns-test-service.dns-792.svc from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad) | |
Mar 6 03:21:49.584: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-792.svc from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad) | |
Mar 6 03:21:49.586: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-792.svc from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad) | |
Mar 6 03:21:49.598: INFO: Lookups using dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-792 wheezy_tcp@dns-test-service.dns-792 wheezy_udp@dns-test-service.dns-792.svc wheezy_tcp@dns-test-service.dns-792.svc wheezy_udp@_http._tcp.dns-test-service.dns-792.svc wheezy_tcp@_http._tcp.dns-test-service.dns-792.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-792 jessie_tcp@dns-test-service.dns-792 jessie_udp@dns-test-service.dns-792.svc jessie_tcp@dns-test-service.dns-792.svc jessie_udp@_http._tcp.dns-test-service.dns-792.svc jessie_tcp@_http._tcp.dns-test-service.dns-792.svc] | |
Mar 6 03:21:54.537: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad) | |
Mar 6 03:21:54.540: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad) | |
Mar 6 03:21:54.542: INFO: Unable to read wheezy_udp@dns-test-service.dns-792 from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad) | |
Mar 6 03:21:54.544: INFO: Unable to read wheezy_tcp@dns-test-service.dns-792 from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad) | |
Mar 6 03:21:54.546: INFO: Unable to read wheezy_udp@dns-test-service.dns-792.svc from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad) | |
Mar 6 03:21:54.548: INFO: Unable to read wheezy_tcp@dns-test-service.dns-792.svc from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad) | |
Mar 6 03:21:54.550: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-792.svc from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad) | |
Mar 6 03:21:54.552: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-792.svc from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad) | |
Mar 6 03:21:54.567: INFO: Unable to read jessie_udp@dns-test-service from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad) | |
Mar 6 03:21:54.568: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad) | |
Mar 6 03:21:54.571: INFO: Unable to read jessie_udp@dns-test-service.dns-792 from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad) | |
Mar 6 03:21:54.573: INFO: Unable to read jessie_tcp@dns-test-service.dns-792 from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad) | |
Mar 6 03:21:54.575: INFO: Unable to read jessie_udp@dns-test-service.dns-792.svc from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad) | |
Mar 6 03:21:54.577: INFO: Unable to read jessie_tcp@dns-test-service.dns-792.svc from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad) | |
Mar 6 03:21:54.579: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-792.svc from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad) | |
Mar 6 03:21:54.581: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-792.svc from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad) | |
Mar 6 03:21:54.592: INFO: Lookups using dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-792 wheezy_tcp@dns-test-service.dns-792 wheezy_udp@dns-test-service.dns-792.svc wheezy_tcp@dns-test-service.dns-792.svc wheezy_udp@_http._tcp.dns-test-service.dns-792.svc wheezy_tcp@_http._tcp.dns-test-service.dns-792.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-792 jessie_tcp@dns-test-service.dns-792 jessie_udp@dns-test-service.dns-792.svc jessie_tcp@dns-test-service.dns-792.svc jessie_udp@_http._tcp.dns-test-service.dns-792.svc jessie_tcp@_http._tcp.dns-test-service.dns-792.svc] | |
Mar 6 03:21:59.593: INFO: DNS probes using dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad succeeded | |
STEP: deleting the pod | |
STEP: deleting the test service | |
STEP: deleting the test headless service | |
[AfterEach] [sig-network] DNS | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 | |
Mar 6 03:21:59.666: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
STEP: Destroying namespace "dns-792" for this suite. | |
• [SLOW TEST:34.406 seconds] | |
[sig-network] DNS | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 | |
should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
------------------------------ | |
{"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":278,"completed":105,"skipped":1929,"failed":7,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]"]} | |
SSSSSSSSSSSSSSSSSSSS | |
------------------------------ | |
[sig-api-machinery] Namespaces [Serial] | |
should ensure that all services are removed when a namespace is deleted [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
[BeforeEach] [sig-api-machinery] Namespaces [Serial] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 | |
STEP: Creating a kubernetes client | |
Mar 6 03:21:59.678: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759 | |
STEP: Building a namespace api object, basename namespaces | |
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in namespaces-2387 | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[It] should ensure that all services are removed when a namespace is deleted [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
STEP: Creating a test namespace | |
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in nsdeletetest-895 | |
STEP: Waiting for a default service account to be provisioned in namespace | |
STEP: Creating a service in the namespace | |
STEP: Deleting the namespace | |
STEP: Waiting for the namespace to be removed. | |
STEP: Recreating the namespace | |
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in nsdeletetest-3698 | |
STEP: Verifying there is no service in the namespace | |
[AfterEach] [sig-api-machinery] Namespaces [Serial] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 | |
Mar 6 03:22:42.118: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
STEP: Destroying namespace "namespaces-2387" for this suite. | |
STEP: Destroying namespace "nsdeletetest-895" for this suite. | |
Mar 6 03:22:42.126: INFO: Namespace nsdeletetest-895 was already deleted | |
STEP: Destroying namespace "nsdeletetest-3698" for this suite. | |
• [SLOW TEST:42.451 seconds] | |
[sig-api-machinery] Namespaces [Serial] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 | |
should ensure that all services are removed when a namespace is deleted [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
------------------------------ | |
{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":278,"completed":106,"skipped":1949,"failed":7,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]"]} | |
SSSSSSSSSS | |
------------------------------ | |
[sig-network] DNS | |
should provide DNS for the cluster [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
[BeforeEach] [sig-network] DNS | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 | |
STEP: Creating a kubernetes client | |
Mar 6 03:22:42.129: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759 | |
STEP: Building a namespace api object, basename dns | |
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in dns-8752 | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[It] should provide DNS for the cluster [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8752.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done | |
STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8752.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done | |
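For readability, the wheezy one-liner above reflowed as a script (same logic; the doubled $$ in the log is the framework's shell escaping of $, and the jessie variant differs only in the file-name prefix):

for i in $(seq 1 600); do
  # UDP lookup of the cluster's kubernetes service; record OK on success
  check="$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" \
    && test -n "$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local
  # the same lookup over TCP
  check="$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" \
    && test -n "$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local
  # derive the pod's dashed A record, e.g. 10-244-3-7.dns-8752.pod.cluster.local
  podARec=$(hostname -i | awk -F. '{print $1"-"$2"-"$3"-"$4".dns-8752.pod.cluster.local"}')
  check="$(dig +notcp +noall +answer +search ${podARec} A)" \
    && test -n "$check" && echo OK > /results/wheezy_udp@PodARecord
  check="$(dig +tcp +noall +answer +search ${podARec} A)" \
    && test -n "$check" && echo OK > /results/wheezy_tcp@PodARecord
  sleep 1
done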
STEP: creating a pod to probe DNS | |
STEP: submitting the pod to kubernetes | |
STEP: retrieving the pod | |
STEP: looking for the results for each expected name from probers | |
Mar 6 03:22:44.296: INFO: DNS probes using dns-8752/dns-test-9ce39c05-7091-4cb8-aaa7-dd28c12ad1e2 succeeded | |
STEP: deleting the pod | |
[AfterEach] [sig-network] DNS | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 | |
Mar 6 03:22:44.306: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
STEP: Destroying namespace "dns-8752" for this suite. | |
•{"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":278,"completed":107,"skipped":1959,"failed":7,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]"]} | |
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS | |
------------------------------ | |
[sig-node] Downward API | |
should provide pod UID as env vars [NodeConformance] [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
[BeforeEach] [sig-node] Downward API | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 | |
STEP: Creating a kubernetes client | |
Mar 6 03:22:44.321: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759 | |
STEP: Building a namespace api object, basename downward-api | |
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-6558 | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[It] should provide pod UID as env vars [NodeConformance] [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
STEP: Creating a pod to test downward api env vars | |
Mar 6 03:22:44.457: INFO: Waiting up to 5m0s for pod "downward-api-34457227-1de7-4c21-ac4c-4cd91732adc0" in namespace "downward-api-6558" to be "success or failure" | |
Mar 6 03:22:44.459: INFO: Pod "downward-api-34457227-1de7-4c21-ac4c-4cd91732adc0": Phase="Pending", Reason="", readiness=false. Elapsed: 1.983049ms | |
Mar 6 03:22:46.461: INFO: Pod "downward-api-34457227-1de7-4c21-ac4c-4cd91732adc0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.004031712s | |
STEP: Saw pod success | |
Mar 6 03:22:46.461: INFO: Pod "downward-api-34457227-1de7-4c21-ac4c-4cd91732adc0" satisfied condition "success or failure" | |
Mar 6 03:22:46.463: INFO: Trying to get logs from node worker02 pod downward-api-34457227-1de7-4c21-ac4c-4cd91732adc0 container dapi-container: <nil> | |
STEP: delete the pod | |
Mar 6 03:22:46.475: INFO: Waiting for pod downward-api-34457227-1de7-4c21-ac4c-4cd91732adc0 to disappear | |
Mar 6 03:22:46.477: INFO: Pod downward-api-34457227-1de7-4c21-ac4c-4cd91732adc0 no longer exists | |
[AfterEach] [sig-node] Downward API | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 | |
Mar 6 03:22:46.477: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
STEP: Destroying namespace "downward-api-6558" for this suite. | |
•{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":278,"completed":108,"skipped":1988,"failed":7,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]"]} | |
------------------------------ | |
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] | |
should be able to convert a non homogeneous list of CRs [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 | |
STEP: Creating a kubernetes client | |
Mar 6 03:22:46.484: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759 | |
STEP: Building a namespace api object, basename crd-webhook | |
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in crd-webhook-3107 | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125 | |
STEP: Setting up server cert | |
STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication | |
STEP: Deploying the custom resource conversion webhook pod | |
STEP: Wait for the deployment to be ready | |
Mar 6 03:22:46.842: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set | |
STEP: Deploying the webhook service | |
STEP: Verifying the service has paired with the endpoint | |
Mar 6 03:22:49.876: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 | |
[It] should be able to convert a non homogeneous list of CRs [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
Mar 6 03:22:49.879: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759 | |
Mar 6 03:23:25.460: INFO: error waiting for conversion to succeed during setup: conversion webhook for stable.example.com/v2, Kind=E2e-test-crd-webhook-7161-crd failed: Post https://e2e-test-crd-conversion-webhook.crd-webhook-3107.svc:9443/crdconvert?timeout=30s: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) | |
Mar 6 03:23:55.564: INFO: error waiting for conversion to succeed during setup: conversion webhook for stable.example.com/v2, Kind=E2e-test-crd-webhook-7161-crd failed: Post https://e2e-test-crd-conversion-webhook.crd-webhook-3107.svc:9443/crdconvert?timeout=30s: context deadline exceeded (Client.Timeout exceeded while awaiting headers) | |
Mar 6 03:24:25.568: INFO: error waiting for conversion to succeed during setup: conversion webhook for stable.example.com/v2, Kind=E2e-test-crd-webhook-7161-crd failed: Post https://e2e-test-crd-conversion-webhook.crd-webhook-3107.svc:9443/crdconvert?timeout=30s: context deadline exceeded (Client.Timeout exceeded while awaiting headers) | |
Mar 6 03:24:25.568: FAIL: Unexpected error: | |
<*errors.errorString | 0xc0000b3950>: { | |
s: "timed out waiting for the condition", | |
} | |
timed out waiting for the condition | |
occurred | |
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 | |
STEP: Collecting events from namespace "crd-webhook-3107". | |
STEP: Found 6 events. | |
Mar 6 03:24:26.082: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for sample-crd-conversion-webhook-deployment-78dcf5dd84-sf2fd: {default-scheduler } Scheduled: Successfully assigned crd-webhook-3107/sample-crd-conversion-webhook-deployment-78dcf5dd84-sf2fd to worker02 | |
Mar 6 03:24:26.082: INFO: At 2020-03-06 03:22:46 +0000 UTC - event for sample-crd-conversion-webhook-deployment: {deployment-controller } ScalingReplicaSet: Scaled up replica set sample-crd-conversion-webhook-deployment-78dcf5dd84 to 1 | |
Mar 6 03:24:26.082: INFO: At 2020-03-06 03:22:46 +0000 UTC - event for sample-crd-conversion-webhook-deployment-78dcf5dd84: {replicaset-controller } SuccessfulCreate: Created pod: sample-crd-conversion-webhook-deployment-78dcf5dd84-sf2fd | |
Mar 6 03:24:26.082: INFO: At 2020-03-06 03:22:47 +0000 UTC - event for sample-crd-conversion-webhook-deployment-78dcf5dd84-sf2fd: {kubelet worker02} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine | |
Mar 6 03:24:26.082: INFO: At 2020-03-06 03:22:47 +0000 UTC - event for sample-crd-conversion-webhook-deployment-78dcf5dd84-sf2fd: {kubelet worker02} Created: Created container sample-crd-conversion-webhook | |
Mar 6 03:24:26.082: INFO: At 2020-03-06 03:22:47 +0000 UTC - event for sample-crd-conversion-webhook-deployment-78dcf5dd84-sf2fd: {kubelet worker02} Started: Started container sample-crd-conversion-webhook | |
Mar 6 03:24:26.084: INFO: POD NODE PHASE GRACE CONDITIONS | |
Mar 6 03:24:26.084: INFO: sample-crd-conversion-webhook-deployment-78dcf5dd84-sf2fd worker02 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-06 03:22:46 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-06 03:22:48 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-06 03:22:48 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-06 03:22:46 +0000 UTC }] | |
Mar 6 03:24:26.084: INFO: | |
Mar 6 03:24:26.087: INFO: | |
Logging node info for node master01 | |
Mar 6 03:24:26.089: INFO: Node Info: &Node{ObjectMeta:{master01 /api/v1/nodes/master01 aeae8a5b-4e17-4702-bb02-bcfde6cdb12a 16254 0 2020-03-06 02:29:18 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master01 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"76:15:82:0d:8b:ab"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:192.168.1.247 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{42140479488 0} {<nil>} 41152812Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3823214592 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{37926431477 0} {<nil>} 37926431477 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3718356992 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-03-06 03:24:01 +0000 UTC,LastTransitionTime:2020-03-06 02:29:18 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-03-06 03:24:01 +0000 UTC,LastTransitionTime:2020-03-06 02:29:18 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-03-06 03:24:01 +0000 UTC,LastTransitionTime:2020-03-06 02:29:18 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-03-06 03:24:01 +0000 UTC,LastTransitionTime:2020-03-06 02:30:33 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:192.168.1.247,},NodeAddress{Type:Hostname,Address:master01,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:20200220105402131453637367482142,SystemUUID:195205FE-EE72-4794-8EAA-AC554EFDEC9B,BootID:6a3bf627-7476-4f52-84fa-f3eab6d26427,KernelVersion:3.10.0-1062.12.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.5,KubeletVersion:v1.17.3,KubeProxyVersion:v1.17.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[192.168.1.252/library/k8s-keepalived@sha256:3db0032ef2feef675710595681cf9463470af179cd324c6773e831b6649ef785 192.168.1.252/library/k8s-keepalived:1.3.5],SizeBytes:356553439,},ContainerImage{Names:[sonobuoy/systemd-logs@sha256:fadad24a66ddd544987c38811108a73d1a306dd3b5e3f090b207786f2825ffde sonobuoy/systemd-logs:v0.3],SizeBytes:297365055,},ContainerImage{Names:[192.168.1.252/library/etcd-amd64@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216 
192.168.1.252/library/etcd-amd64:3.4.3-0],SizeBytes:288426917,},ContainerImage{Names:[192.168.1.252/library/node@sha256:3226b047a7034918a05c986347c5fb4d2cce6d0844f325851bfba586271ee617 192.168.1.252/library/node:v3.12.0],SizeBytes:257501722,},ContainerImage{Names:[192.168.1.252/library/cni@sha256:dc3bc525f1d3b794db1f2a7ceb7d8b84699d13e1431fbc117063f7e2075ff4b5 192.168.1.252/library/cni:v3.12.0],SizeBytes:206678344,},ContainerImage{Names:[192.168.1.252/library/kube-apiserver@sha256:4ee4113bcce32ae436b15364b09a2439dee15f6a19bbcf43a470d3dc879b0c62 192.168.1.252/library/kube-apiserver:v1.17.3],SizeBytes:170986003,},ContainerImage{Names:[192.168.1.252/library/envoy@sha256:b36ee021fc4d285de7861dbaee01e7437ce1d63814ead6ae3e4dfcad4a951b2e 192.168.1.252/library/envoy:v1.12.2],SizeBytes:170487454,},ContainerImage{Names:[192.168.1.252/library/kube-controller-manager@sha256:8f03391c0d22e3da8d22725178efe3c4338e1920504f7f4eb4a1c7f5f40c4c6e 192.168.1.252/library/kube-controller-manager:v1.17.3],SizeBytes:160918035,},ContainerImage{Names:[192.168.1.252/library/dashboard@sha256:4e0d39dae7e089b77fe2bbcef648f89905716db9c1f0884950bfd42d9f446c29 192.168.1.252/library/dashboard:v2.0.0-rc5],SizeBytes:126359420,},ContainerImage{Names:[192.168.1.252/library/kube-proxy@sha256:8dfd672298f8fbcd37731e0f2e45be87199cee6ee1f482ffec766693dcec0ec4 192.168.1.252/library/kube-proxy:v1.17.3],SizeBytes:115964919,},ContainerImage{Names:[192.168.1.252/library/pod2daemon-flexvol@sha256:2bf967507ad1adb749f3484b5d39e7d7b8700c4a0f836e8093dae5c57a585ccf 192.168.1.252/library/pod2daemon-flexvol:v3.12.0],SizeBytes:111122324,},ContainerImage{Names:[192.168.1.252/library/kube-scheduler@sha256:eea535b58da5b2fb66b32c61c6913d014cf85061a39527ad9bca2fa84b53dc1c 192.168.1.252/library/kube-scheduler:v1.17.3],SizeBytes:94435859,},ContainerImage{Names:[sonobuoy/sonobuoy@sha256:73f9cfe546ac6d0d5b94308293484c29c3b02fc341a975cfca97c80dd8728ed7 sonobuoy/sonobuoy:v0.17.2],SizeBytes:84339798,},ContainerImage{Names:[192.168.1.252/library/k8s-haproxy@sha256:f88bca67a2782e7bd88af168d5c11216c32104d12fe9240fac54a1d3196e3f9c 192.168.1.252/library/k8s-haproxy:2.0.0],SizeBytes:73550856,},ContainerImage{Names:[192.168.1.252/library/kube-controllers@sha256:edf14a5bcc663d2b0013b1830469626b7aa27206cbc7715ed83c042890ca5837 192.168.1.252/library/kube-controllers:v3.12.0],SizeBytes:56567983,},ContainerImage{Names:[192.168.1.252/library/typha@sha256:3baf9aef445a3224160748d6f560426eab798d6c65620020b2466e114bf6805f 192.168.1.252/library/typha:v3.12.0],SizeBytes:56034822,},ContainerImage{Names:[192.168.1.252/library/flannel@sha256:bd76b84c74ad70368a2341c2402841b75950df881388e43fc2aca000c546653a 192.168.1.252/library/flannel:v0.11.0-amd64],SizeBytes:52567296,},ContainerImage{Names:[192.168.1.252/library/ctl@sha256:128e4c95cf92a482496d591c43cad2a6a21fab1f0e8a8f13e8503f1324106dc8 192.168.1.252/library/ctl:v3.12.0],SizeBytes:47895826,},ContainerImage{Names:[192.168.1.252/library/coredns@sha256:608ac7ccba5ce41c6941fca13bc67059c1eef927fd968b554b790e21cc92543c 192.168.1.252/library/coredns:1.6.5],SizeBytes:41578211,},ContainerImage{Names:[192.168.1.252/library/metrics-scraper@sha256:e24a74b3b1cdc84d6285d507a12eb06907fd8c457b3e8ae9baa9418eca43efc4 192.168.1.252/library/metrics-scraper:v1.0.3],SizeBytes:40105664,},ContainerImage{Names:[192.168.1.252/library/metrics-server-amd64@sha256:c9c4e95068b51d6b33a9dccc61875df07dc650abbf4ac1a19d58b4628f89288b 
192.168.1.252/library/metrics-server-amd64:v0.3.6],SizeBytes:39944451,},ContainerImage{Names:[192.168.1.252/library/contour@sha256:3e10c69dfeaa830b84a50e6b47ce90e0f5a1aa84daf77f7662313077fa9579cf 192.168.1.252/library/contour:v1.1.0],SizeBytes:35721216,},ContainerImage{Names:[192.168.1.252/library/kuard-amd64@sha256:bd17153e9a3319f401acc7a27759243f37d422c06cbbf01cb3e1f54bbbfe14f4 192.168.1.252/library/kuard-amd64:1],SizeBytes:19745911,},ContainerImage{Names:[192.168.1.252/library/pause@sha256:fcaff905397ba63fd376d0c3019f1f1cb6e7506131389edbcb3d22719f1ae54d 192.168.1.252/library/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} | |
Mar 6 03:24:26.089: INFO: | |
Logging kubelet events for node master01 | |
Mar 6 03:24:26.093: INFO: | |
Logging pods the kubelet thinks are on node master01 | |
Mar 6 03:24:26.102: INFO: sonobuoy-systemd-logs-daemon-set-2e5dbce20e154397-drhpn started at 2020-03-06 02:38:12 +0000 UTC (0+2 container statuses recorded) | |
Mar 6 03:24:26.102: INFO: Container sonobuoy-worker ready: true, restart count 0 | |
Mar 6 03:24:26.102: INFO: Container systemd-logs ready: true, restart count 0 | |
Mar 6 03:24:26.102: INFO: kube-flannel-ds-amd64-6mbnb started at 2020-03-06 02:30:22 +0000 UTC (1+1 container statuses recorded) | |
Mar 6 03:24:26.102: INFO: Init container install-cni ready: true, restart count 0 | |
Mar 6 03:24:26.102: INFO: Container kube-flannel ready: true, restart count 0 | |
Mar 6 03:24:26.102: INFO: kube-proxy-4j8ft started at 2020-03-06 02:30:22 +0000 UTC (0+1 container statuses recorded) | |
Mar 6 03:24:26.102: INFO: Container kube-proxy ready: true, restart count 0 | |
Mar 6 03:24:26.102: INFO: kube-apiserver-master01 started at 2020-03-06 02:30:22 +0000 UTC (0+1 container statuses recorded) | |
Mar 6 03:24:26.102: INFO: Container kube-apiserver ready: true, restart count 0 | |
Mar 6 03:24:26.102: INFO: kube-controller-manager-master01 started at 2020-03-06 02:30:22 +0000 UTC (0+1 container statuses recorded) | |
Mar 6 03:24:26.102: INFO: Container kube-controller-manager ready: true, restart count 1 | |
Mar 6 03:24:26.102: INFO: kube-scheduler-master01 started at 2020-03-06 02:30:22 +0000 UTC (0+1 container statuses recorded) | |
Mar 6 03:24:26.102: INFO: Container kube-scheduler ready: true, restart count 1 | |
W0306 03:24:26.105533 19 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. | |
Mar 6 03:24:26.125: INFO: | |
Latency metrics for node master01 | |
Mar 6 03:24:26.125: INFO: | |
Logging node info for node master02 | |
Mar 6 03:24:26.126: INFO: Node Info: &Node{ObjectMeta:{master02 /api/v1/nodes/master02 6a0ecb6f-ef31-4754-858b-3eba76999224 16243 0 2020-03-06 02:29:17 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master02 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"82:c1:38:99:3b:39"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:192.168.1.248 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{42140479488 0} {<nil>} 41152812Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3823214592 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{37926431477 0} {<nil>} 37926431477 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3718356992 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-03-06 03:23:58 +0000 UTC,LastTransitionTime:2020-03-06 02:29:16 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-03-06 03:23:58 +0000 UTC,LastTransitionTime:2020-03-06 02:29:16 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-03-06 03:23:58 +0000 UTC,LastTransitionTime:2020-03-06 02:29:16 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-03-06 03:23:58 +0000 UTC,LastTransitionTime:2020-03-06 02:30:10 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:192.168.1.248,},NodeAddress{Type:Hostname,Address:master02,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:20200220105402131453637367482142,SystemUUID:8B2C4639-6D22-4D0D-A03C-F6D7E328F9D5,BootID:efd7329f-ae31-4806-ba13-7fdd5fad57df,KernelVersion:3.10.0-1062.12.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.5,KubeletVersion:v1.17.3,KubeProxyVersion:v1.17.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[sonobuoy/systemd-logs@sha256:fadad24a66ddd544987c38811108a73d1a306dd3b5e3f090b207786f2825ffde sonobuoy/systemd-logs:v0.3],SizeBytes:297365055,},ContainerImage{Names:[192.168.1.252/library/etcd-amd64@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216 192.168.1.252/library/etcd-amd64:3.4.3-0],SizeBytes:288426917,},ContainerImage{Names:[192.168.1.252/library/kube-apiserver@sha256:4ee4113bcce32ae436b15364b09a2439dee15f6a19bbcf43a470d3dc879b0c62 
192.168.1.252/library/kube-apiserver:v1.17.3],SizeBytes:170986003,},ContainerImage{Names:[192.168.1.252/library/kube-controller-manager@sha256:8f03391c0d22e3da8d22725178efe3c4338e1920504f7f4eb4a1c7f5f40c4c6e 192.168.1.252/library/kube-controller-manager:v1.17.3],SizeBytes:160918035,},ContainerImage{Names:[192.168.1.252/library/kube-proxy@sha256:8dfd672298f8fbcd37731e0f2e45be87199cee6ee1f482ffec766693dcec0ec4 192.168.1.252/library/kube-proxy:v1.17.3],SizeBytes:115964919,},ContainerImage{Names:[192.168.1.252/library/kube-scheduler@sha256:eea535b58da5b2fb66b32c61c6913d014cf85061a39527ad9bca2fa84b53dc1c 192.168.1.252/library/kube-scheduler:v1.17.3],SizeBytes:94435859,},ContainerImage{Names:[sonobuoy/sonobuoy@sha256:73f9cfe546ac6d0d5b94308293484c29c3b02fc341a975cfca97c80dd8728ed7 sonobuoy/sonobuoy:v0.17.2],SizeBytes:84339798,},ContainerImage{Names:[192.168.1.252/library/k8s-haproxy@sha256:f88bca67a2782e7bd88af168d5c11216c32104d12fe9240fac54a1d3196e3f9c 192.168.1.252/library/k8s-haproxy:2.0.0],SizeBytes:73550856,},ContainerImage{Names:[192.168.1.252/library/flannel@sha256:bd76b84c74ad70368a2341c2402841b75950df881388e43fc2aca000c546653a 192.168.1.252/library/flannel:v0.11.0-amd64],SizeBytes:52567296,},ContainerImage{Names:[192.168.1.252/library/coredns@sha256:608ac7ccba5ce41c6941fca13bc67059c1eef927fd968b554b790e21cc92543c 192.168.1.252/library/coredns:1.6.5],SizeBytes:41578211,},ContainerImage{Names:[192.168.1.252/library/pause@sha256:fcaff905397ba63fd376d0c3019f1f1cb6e7506131389edbcb3d22719f1ae54d 192.168.1.252/library/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} | |
Mar 6 03:24:26.127: INFO: | |
Logging kubelet events for node master02 | |
Mar 6 03:24:26.130: INFO: | |
Logging pods the kubelet thinks are on node master02 | |
Mar 6 03:24:26.142: INFO: kube-scheduler-master02 started at 2020-03-06 02:29:23 +0000 UTC (0+1 container statuses recorded) | |
Mar 6 03:24:26.142: INFO: Container kube-scheduler ready: true, restart count 1 | |
Mar 6 03:24:26.142: INFO: kube-proxy-scdss started at 2020-03-06 02:30:00 +0000 UTC (0+1 container statuses recorded) | |
Mar 6 03:24:26.142: INFO: Container kube-proxy ready: true, restart count 0 | |
Mar 6 03:24:26.142: INFO: kube-flannel-ds-amd64-vfl78 started at 2020-03-06 02:30:00 +0000 UTC (1+1 container statuses recorded) | |
Mar 6 03:24:26.142: INFO: Init container install-cni ready: true, restart count 0 | |
Mar 6 03:24:26.142: INFO: Container kube-flannel ready: true, restart count 0 | |
Mar 6 03:24:26.142: INFO: coredns-7795996659-phdkc started at 2020-03-06 02:30:13 +0000 UTC (0+1 container statuses recorded) | |
Mar 6 03:24:26.142: INFO: Container coredns ready: true, restart count 0 | |
Mar 6 03:24:26.142: INFO: sonobuoy-systemd-logs-daemon-set-2e5dbce20e154397-2zmwm started at 2020-03-06 02:38:12 +0000 UTC (0+2 container statuses recorded) | |
Mar 6 03:24:26.142: INFO: Container sonobuoy-worker ready: true, restart count 0 | |
Mar 6 03:24:26.142: INFO: Container systemd-logs ready: true, restart count 0 | |
Mar 6 03:24:26.142: INFO: kube-apiserver-master02 started at 2020-03-06 02:29:50 +0000 UTC (0+1 container statuses recorded) | |
Mar 6 03:24:26.142: INFO: Container kube-apiserver ready: true, restart count 0 | |
Mar 6 03:24:26.142: INFO: kube-controller-manager-master02 started at 2020-03-06 02:29:50 +0000 UTC (0+1 container statuses recorded) | |
Mar 6 03:24:26.142: INFO: Container kube-controller-manager ready: true, restart count 1 | |
W0306 03:24:26.148940 19 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. | |
Mar 6 03:24:26.165: INFO: | |
Latency metrics for node master02 | |
Mar 6 03:24:26.165: INFO: | |
Logging node info for node master03 | |
Mar 6 03:24:26.167: INFO: Node Info: &Node{ObjectMeta:{master03 /api/v1/nodes/master03 c508ee4c-fe9d-4c73-a857-e57fba26fa86 16244 0 2020-03-06 02:29:17 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master03 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"4a:aa:08:ea:16:90"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:192.168.1.249 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{42140479488 0} {<nil>} 41152812Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3823226880 0} {<nil>} 3733620Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{37926431477 0} {<nil>} 37926431477 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3718369280 0} {<nil>} 3631220Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-03-06 03:23:58 +0000 UTC,LastTransitionTime:2020-03-06 02:29:17 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-03-06 03:23:58 +0000 UTC,LastTransitionTime:2020-03-06 02:29:17 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-03-06 03:23:58 +0000 UTC,LastTransitionTime:2020-03-06 02:29:17 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-03-06 03:23:58 +0000 UTC,LastTransitionTime:2020-03-06 02:30:10 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:192.168.1.249,},NodeAddress{Type:Hostname,Address:master03,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:20200220105402131453637367482142,SystemUUID:0C8F0A67-EB7E-42DE-9418-0973FE735A08,BootID:05b1fa23-e6be-4032-bc93-8800264dff91,KernelVersion:3.10.0-1062.12.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.5,KubeletVersion:v1.17.3,KubeProxyVersion:v1.17.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[sonobuoy/systemd-logs@sha256:fadad24a66ddd544987c38811108a73d1a306dd3b5e3f090b207786f2825ffde sonobuoy/systemd-logs:v0.3],SizeBytes:297365055,},ContainerImage{Names:[192.168.1.252/library/etcd-amd64@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216 192.168.1.252/library/etcd-amd64:3.4.3-0],SizeBytes:288426917,},ContainerImage{Names:[192.168.1.252/library/kube-apiserver@sha256:4ee4113bcce32ae436b15364b09a2439dee15f6a19bbcf43a470d3dc879b0c62 
192.168.1.252/library/kube-apiserver:v1.17.3],SizeBytes:170986003,},ContainerImage{Names:[192.168.1.252/library/kube-controller-manager@sha256:8f03391c0d22e3da8d22725178efe3c4338e1920504f7f4eb4a1c7f5f40c4c6e 192.168.1.252/library/kube-controller-manager:v1.17.3],SizeBytes:160918035,},ContainerImage{Names:[192.168.1.252/library/dashboard@sha256:4e0d39dae7e089b77fe2bbcef648f89905716db9c1f0884950bfd42d9f446c29 192.168.1.252/library/dashboard:v2.0.0-rc5],SizeBytes:126359420,},ContainerImage{Names:[192.168.1.252/library/kube-proxy@sha256:8dfd672298f8fbcd37731e0f2e45be87199cee6ee1f482ffec766693dcec0ec4 192.168.1.252/library/kube-proxy:v1.17.3],SizeBytes:115964919,},ContainerImage{Names:[192.168.1.252/library/kube-scheduler@sha256:eea535b58da5b2fb66b32c61c6913d014cf85061a39527ad9bca2fa84b53dc1c 192.168.1.252/library/kube-scheduler:v1.17.3],SizeBytes:94435859,},ContainerImage{Names:[sonobuoy/sonobuoy@sha256:73f9cfe546ac6d0d5b94308293484c29c3b02fc341a975cfca97c80dd8728ed7 sonobuoy/sonobuoy:v0.17.2],SizeBytes:84339798,},ContainerImage{Names:[192.168.1.252/library/k8s-haproxy@sha256:f88bca67a2782e7bd88af168d5c11216c32104d12fe9240fac54a1d3196e3f9c 192.168.1.252/library/k8s-haproxy:2.0.0],SizeBytes:73550856,},ContainerImage{Names:[192.168.1.252/library/flannel@sha256:bd76b84c74ad70368a2341c2402841b75950df881388e43fc2aca000c546653a 192.168.1.252/library/flannel:v0.11.0-amd64],SizeBytes:52567296,},ContainerImage{Names:[192.168.1.252/library/coredns@sha256:608ac7ccba5ce41c6941fca13bc67059c1eef927fd968b554b790e21cc92543c 192.168.1.252/library/coredns:1.6.5],SizeBytes:41578211,},ContainerImage{Names:[192.168.1.252/library/metrics-scraper@sha256:e24a74b3b1cdc84d6285d507a12eb06907fd8c457b3e8ae9baa9418eca43efc4 192.168.1.252/library/metrics-scraper:v1.0.3],SizeBytes:40105664,},ContainerImage{Names:[192.168.1.252/library/pause@sha256:fcaff905397ba63fd376d0c3019f1f1cb6e7506131389edbcb3d22719f1ae54d 192.168.1.252/library/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} | |
Mar 6 03:24:26.167: INFO: | |
Logging kubelet events for node master03 | |
Mar 6 03:24:26.171: INFO: | |
Logging pods the kubelet thinks are on node master03 | |
Mar 6 03:24:26.187: INFO: kube-flannel-ds-amd64-hs69k started at 2020-03-06 02:30:00 +0000 UTC (1+1 container statuses recorded) | |
Mar 6 03:24:26.187: INFO: Init container install-cni ready: true, restart count 0 | |
Mar 6 03:24:26.187: INFO: Container kube-flannel ready: true, restart count 0 | |
Mar 6 03:24:26.187: INFO: dashboard-metrics-scraper-56568cb9d7-d57kl started at 2020-03-06 02:30:10 +0000 UTC (0+1 container statuses recorded) | |
Mar 6 03:24:26.187: INFO: Container dashboard-metrics-scraper ready: true, restart count 0 | |
Mar 6 03:24:26.187: INFO: kube-controller-manager-master03 started at 2020-03-06 02:29:50 +0000 UTC (0+1 container statuses recorded) | |
Mar 6 03:24:26.187: INFO: Container kube-controller-manager ready: true, restart count 1 | |
Mar 6 03:24:26.187: INFO: kube-scheduler-master03 started at 2020-03-06 02:29:38 +0000 UTC (0+1 container statuses recorded) | |
Mar 6 03:24:26.187: INFO: Container kube-scheduler ready: true, restart count 1 | |
Mar 6 03:24:26.187: INFO: kube-proxy-stbnn started at 2020-03-06 02:30:00 +0000 UTC (0+1 container statuses recorded) | |
Mar 6 03:24:26.187: INFO: Container kube-proxy ready: true, restart count 0 | |
Mar 6 03:24:26.187: INFO: kubernetes-dashboard-6647798d59-j2ms4 started at 2020-03-06 02:30:10 +0000 UTC (0+1 container statuses recorded) | |
Mar 6 03:24:26.187: INFO: Container kubernetes-dashboard ready: true, restart count 0 | |
Mar 6 03:24:26.187: INFO: coredns-7795996659-cmq4d started at 2020-03-06 02:30:13 +0000 UTC (0+1 container statuses recorded) | |
Mar 6 03:24:26.187: INFO: Container coredns ready: true, restart count 0 | |
Mar 6 03:24:26.187: INFO: sonobuoy-systemd-logs-daemon-set-2e5dbce20e154397-w5psq started at 2020-03-06 02:38:12 +0000 UTC (0+2 container statuses recorded) | |
Mar 6 03:24:26.187: INFO: Container sonobuoy-worker ready: true, restart count 0 | |
Mar 6 03:24:26.187: INFO: Container systemd-logs ready: true, restart count 0 | |
Mar 6 03:24:26.187: INFO: kube-apiserver-master03 started at 2020-03-06 02:29:24 +0000 UTC (0+1 container statuses recorded) | |
Mar 6 03:24:26.187: INFO: Container kube-apiserver ready: true, restart count 0 | |
W0306 03:24:26.193419 19 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. | |
Mar 6 03:24:26.216: INFO: | |
Latency metrics for node master03 | |
Mar 6 03:24:26.216: INFO: | |
Logging node info for node worker01 | |
Mar 6 03:24:26.218: INFO: Node Info: &Node{ObjectMeta:{worker01 /api/v1/nodes/worker01 cf4203bb-1bfa-4b35-991f-935275b6bc46 14805 0 2020-03-06 02:30:30 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:worker01 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"5a:49:f5:5b:74:b3"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:192.168.1.250 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{42140479488 0} {<nil>} 41152812Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3823214592 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{37926431477 0} {<nil>} 37926431477 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3718356992 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-03-06 03:19:53 +0000 UTC,LastTransitionTime:2020-03-06 02:30:30 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-03-06 03:19:53 +0000 UTC,LastTransitionTime:2020-03-06 02:30:30 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-03-06 03:19:53 +0000 UTC,LastTransitionTime:2020-03-06 02:30:30 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-03-06 03:19:53 +0000 UTC,LastTransitionTime:2020-03-06 02:30:45 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:192.168.1.250,},NodeAddress{Type:Hostname,Address:worker01,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:20200220105402131453637367482142,SystemUUID:5CA364AA-0FF8-4B57-BA86-F28699575F0D,BootID:c85ad0c4-ebcf-4d01-97f0-a36c1cfc50be,KernelVersion:3.10.0-1062.12.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.5,KubeletVersion:v1.17.3,KubeProxyVersion:v1.17.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[sonobuoy/systemd-logs@sha256:fadad24a66ddd544987c38811108a73d1a306dd3b5e3f090b207786f2825ffde sonobuoy/systemd-logs:v0.3],SizeBytes:297365055,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[192.168.1.252/library/kube-proxy@sha256:8dfd672298f8fbcd37731e0f2e45be87199cee6ee1f482ffec766693dcec0ec4 
192.168.1.252/library/kube-proxy:v1.17.3],SizeBytes:115964919,},ContainerImage{Names:[sonobuoy/sonobuoy@sha256:73f9cfe546ac6d0d5b94308293484c29c3b02fc341a975cfca97c80dd8728ed7 sonobuoy/sonobuoy:v0.17.2],SizeBytes:84339798,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5 gcr.io/kubernetes-e2e-test-images/agnhost:2.8],SizeBytes:52800335,},ContainerImage{Names:[192.168.1.252/library/flannel@sha256:bd76b84c74ad70368a2341c2402841b75950df881388e43fc2aca000c546653a 192.168.1.252/library/flannel:v0.11.0-amd64],SizeBytes:52567296,},ContainerImage{Names:[192.168.1.252/library/metrics-server-amd64@sha256:c9c4e95068b51d6b33a9dccc61875df07dc650abbf4ac1a19d58b4628f89288b 192.168.1.252/library/metrics-server-amd64:v0.3.6],SizeBytes:39944451,},ContainerImage{Names:[192.168.1.252/library/kuard-amd64@sha256:bd17153e9a3319f401acc7a27759243f37d422c06cbbf01cb3e1f54bbbfe14f4 192.168.1.252/library/kuard-amd64:1],SizeBytes:19745911,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[192.168.1.252/library/pause@sha256:fcaff905397ba63fd376d0c3019f1f1cb6e7506131389edbcb3d22719f1ae54d k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea 192.168.1.252/library/pause:3.1 k8s.gcr.io/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} | |
Mar 6 03:24:26.218: INFO: | |
Logging kubelet events for node worker01 | |
Mar 6 03:24:26.223: INFO: | |
Logging pods the kubelet thinks are on node worker01 | |
Mar 6 03:24:26.233: INFO: sonobuoy-systemd-logs-daemon-set-2e5dbce20e154397-2bz8g started at 2020-03-06 02:38:12 +0000 UTC (0+2 container statuses recorded) | |
Mar 6 03:24:26.233: INFO: Container sonobuoy-worker ready: true, restart count 0 | |
Mar 6 03:24:26.233: INFO: Container systemd-logs ready: true, restart count 0 | |
Mar 6 03:24:26.233: INFO: kube-proxy-kcb8f started at 2020-03-06 02:30:30 +0000 UTC (0+1 container statuses recorded) | |
Mar 6 03:24:26.233: INFO: Container kube-proxy ready: true, restart count 0 | |
Mar 6 03:24:26.233: INFO: contour-certgen-82k46 started at 2020-03-06 02:30:46 +0000 UTC (0+1 container statuses recorded) | |
Mar 6 03:24:26.233: INFO: Container contour ready: false, restart count 0 | |
Mar 6 03:24:26.233: INFO: contour-54748c65f5-gk5sz started at 2020-03-06 02:30:51 +0000 UTC (0+1 container statuses recorded) | |
Mar 6 03:24:26.233: INFO: Container contour ready: false, restart count 0 | |
Mar 6 03:24:26.233: INFO: kube-flannel-ds-amd64-xxhz9 started at 2020-03-06 02:30:30 +0000 UTC (1+1 container statuses recorded) | |
Mar 6 03:24:26.233: INFO: Init container install-cni ready: true, restart count 0 | |
Mar 6 03:24:26.233: INFO: Container kube-flannel ready: true, restart count 1 | |
Mar 6 03:24:26.233: INFO: kuard-678c676f5d-vsn86 started at 2020-03-06 02:30:51 +0000 UTC (0+1 container statuses recorded) | |
Mar 6 03:24:26.233: INFO: Container kuard ready: true, restart count 0 | |
Mar 6 03:24:26.233: INFO: envoy-lvmcb started at 2020-03-06 02:30:45 +0000 UTC (1+1 container statuses recorded) | |
Mar 6 03:24:26.233: INFO: Init container envoy-initconfig ready: false, restart count 0 | |
Mar 6 03:24:26.233: INFO: Container envoy ready: false, restart count 0 | |
Mar 6 03:24:26.233: INFO: kuard-678c676f5d-m29b6 started at 2020-03-06 02:30:49 +0000 UTC (0+1 container statuses recorded) | |
Mar 6 03:24:26.233: INFO: Container kuard ready: true, restart count 0 | |
Mar 6 03:24:26.233: INFO: kuard-678c676f5d-tzsnn started at 2020-03-06 02:30:51 +0000 UTC (0+1 container statuses recorded) | |
Mar 6 03:24:26.233: INFO: Container kuard ready: true, restart count 0 | |
Mar 6 03:24:26.233: INFO: contour-54748c65f5-jl5wz started at 2020-03-06 02:30:46 +0000 UTC (0+1 container statuses recorded) | |
Mar 6 03:24:26.233: INFO: Container contour ready: false, restart count 0 | |
Mar 6 03:24:26.233: INFO: metrics-server-78799bf646-xrsnn started at 2020-03-06 02:30:51 +0000 UTC (0+1 container statuses recorded) | |
Mar 6 03:24:26.233: INFO: Container metrics-server ready: true, restart count 0 | |
W0306 03:24:26.236428 19 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. | |
Mar 6 03:24:26.255: INFO: | |
Latency metrics for node worker01 | |
Mar 6 03:24:26.255: INFO: | |
Logging node info for node worker02 | |
Mar 6 03:24:26.257: INFO: Node Info: &Node{ObjectMeta:{worker02 /api/v1/nodes/worker02 f0994ba1-7e4e-4cc8-b3c8-25d34b25d9ce 16224 0 2020-03-06 02:30:30 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:worker02 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"1a:75:0a:e8:cc:76"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:192.168.1.251 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{42140479488 0} {<nil>} 41152812Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3823214592 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{37926431477 0} {<nil>} 37926431477 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3718356992 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-03-06 03:23:52 +0000 UTC,LastTransitionTime:2020-03-06 02:30:30 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-03-06 03:23:52 +0000 UTC,LastTransitionTime:2020-03-06 02:30:30 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-03-06 03:23:52 +0000 UTC,LastTransitionTime:2020-03-06 02:30:30 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-03-06 03:23:52 +0000 UTC,LastTransitionTime:2020-03-06 02:30:55 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:192.168.1.251,},NodeAddress{Type:Hostname,Address:worker02,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:20200220105402131453637367482142,SystemUUID:EDBF7E33-228B-4233-93CF-7850B5A311E4,BootID:bd6a4f0f-5ddb-4585-83df-253b9292b617,KernelVersion:3.10.0-1062.12.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.5,KubeletVersion:v1.17.3,KubeProxyVersion:v1.17.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[gcr.io/google-containers/conformance@sha256:502434491cbc3fac5d9a606a879e554cf881b2ba5b688bed25f2c33d3ff1c777 gcr.io/google-containers/conformance:v1.17.3],SizeBytes:575831882,},ContainerImage{Names:[sonobuoy/systemd-logs@sha256:fadad24a66ddd544987c38811108a73d1a306dd3b5e3f090b207786f2825ffde sonobuoy/systemd-logs:v0.3],SizeBytes:297365055,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb 
gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:195659796,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[192.168.1.252/library/kube-proxy@sha256:8dfd672298f8fbcd37731e0f2e45be87199cee6ee1f482ffec766693dcec0ec4 192.168.1.252/library/kube-proxy:v1.17.3],SizeBytes:115964919,},ContainerImage{Names:[sonobuoy/sonobuoy@sha256:73f9cfe546ac6d0d5b94308293484c29c3b02fc341a975cfca97c80dd8728ed7 sonobuoy/sonobuoy:v0.17.2],SizeBytes:84339798,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5 gcr.io/kubernetes-e2e-test-images/agnhost:2.8],SizeBytes:52800335,},ContainerImage{Names:[192.168.1.252/library/flannel@sha256:bd76b84c74ad70368a2341c2402841b75950df881388e43fc2aca000c546653a 192.168.1.252/library/flannel:v0.11.0-amd64],SizeBytes:52567296,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/dnsutils@sha256:2abeee84efb79c14d731966e034af33bf324d3b26ca28497555511ff094b3ddd gcr.io/kubernetes-e2e-test-images/dnsutils:1.1],SizeBytes:9349974,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0],SizeBytes:4732240,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0],SizeBytes:1450451,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[192.168.1.252/library/pause@sha256:fcaff905397ba63fd376d0c3019f1f1cb6e7506131389edbcb3d22719f1ae54d k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea 192.168.1.252/library/pause:3.1 k8s.gcr.io/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} | |
Mar 6 03:24:26.258: INFO: | |
Logging kubelet events for node worker02 | |
Mar 6 03:24:26.261: INFO: | |
Logging pods the kubelet thinks are on node worker02 | |
Mar 6 03:24:26.271: INFO: kube-flannel-ds-amd64-ztfzf started at 2020-03-06 02:30:30 +0000 UTC (1+1 container statuses recorded) | |
Mar 6 03:24:26.271: INFO: Init container install-cni ready: true, restart count 0 | |
Mar 6 03:24:26.271: INFO: Container kube-flannel ready: true, restart count 0 | |
Mar 6 03:24:26.271: INFO: envoy-wgz76 started at 2020-03-06 02:30:55 +0000 UTC (1+1 container statuses recorded) | |
Mar 6 03:24:26.271: INFO: Init container envoy-initconfig ready: false, restart count 0 | |
Mar 6 03:24:26.271: INFO: Container envoy ready: false, restart count 0 | |
Mar 6 03:24:26.271: INFO: sonobuoy-e2e-job-67137ff64ac145d3 started at 2020-03-06 02:38:12 +0000 UTC (0+2 container statuses recorded) | |
Mar 6 03:24:26.271: INFO: Container e2e ready: true, restart count 0 | |
Mar 6 03:24:26.271: INFO: Container sonobuoy-worker ready: true, restart count 0 | |
Mar 6 03:24:26.271: INFO: sonobuoy-systemd-logs-daemon-set-2e5dbce20e154397-bpjtd started at 2020-03-06 02:38:12 +0000 UTC (0+2 container statuses recorded) | |
Mar 6 03:24:26.271: INFO: Container sonobuoy-worker ready: true, restart count 0 | |
Mar 6 03:24:26.271: INFO: Container systemd-logs ready: true, restart count 0 | |
Mar 6 03:24:26.271: INFO: sample-crd-conversion-webhook-deployment-78dcf5dd84-sf2fd started at 2020-03-06 03:22:46 +0000 UTC (0+1 container statuses recorded) | |
Mar 6 03:24:26.271: INFO: Container sample-crd-conversion-webhook ready: true, restart count 0 | |
Mar 6 03:24:26.271: INFO: kube-proxy-5xxdb started at 2020-03-06 02:30:30 +0000 UTC (0+1 container statuses recorded) | |
Mar 6 03:24:26.271: INFO: Container kube-proxy ready: true, restart count 1 | |
Mar 6 03:24:26.271: INFO: sonobuoy started at 2020-03-06 02:38:02 +0000 UTC (0+1 container statuses recorded) | |
Mar 6 03:24:26.271: INFO: Container kube-sonobuoy ready: true, restart count 0 | |
W0306 03:24:26.273848 19 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. | |
Mar 6 03:24:26.300: INFO: | |
Latency metrics for node worker02 | |
Mar 6 03:24:26.300: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
STEP: Destroying namespace "crd-webhook-3107" for this suite. | |
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136 | |
• Failure [99.904 seconds] | |
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 | |
should be able to convert a non homogeneous list of CRs [Conformance] [It] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
Mar 6 03:24:25.568: Unexpected error: | |
<*errors.errorString | 0xc0000b3950>: { | |
s: "timed out waiting for the condition", | |
} | |
timed out waiting for the condition | |
occurred | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:493 | |
------------------------------ | |
{"msg":"FAILED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":278,"completed":108,"skipped":1988,"failed":8,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]"]} | |
SSSSSSSSSSSSSSSSSSSS | |
------------------------------ | |
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] | |
should perform canary updates and phased rolling updates of template modifications [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
[BeforeEach] [sig-apps] StatefulSet | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 | |
STEP: Creating a kubernetes client | |
Mar 6 03:24:26.388: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759 | |
STEP: Building a namespace api object, basename statefulset | |
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in statefulset-6523 | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[BeforeEach] [sig-apps] StatefulSet | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 | |
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 | |
STEP: Creating service test in namespace statefulset-6523 | |
[It] should perform canary updates and phased rolling updates of template modifications [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
STEP: Creating a new StatefulSet | |
Mar 6 03:24:26.546: INFO: Found 0 stateful pods, waiting for 3 | |
Mar 6 03:24:36.548: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true | |
Mar 6 03:24:36.548: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true | |
Mar 6 03:24:36.548: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true | |
STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine | |
Mar 6 03:24:36.569: INFO: Updating stateful set ss2 | |
STEP: Creating a new revision | |
STEP: Not applying an update when the partition is greater than the number of replicas | |
STEP: Performing a canary update | |
Mar 6 03:24:46.594: INFO: Updating stateful set ss2 | |
Mar 6 03:24:46.602: INFO: Waiting for Pod statefulset-6523/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 | |
STEP: Restoring Pods to the correct revision when they are deleted | |
Mar 6 03:24:56.642: INFO: Found 2 stateful pods, waiting for 3 | |
Mar 6 03:25:06.645: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true | |
Mar 6 03:25:06.645: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true | |
Mar 6 03:25:06.645: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true | |
STEP: Performing a phased rolling update | |
Mar 6 03:25:06.663: INFO: Updating stateful set ss2 | |
Mar 6 03:25:06.667: INFO: Waiting for Pod statefulset-6523/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 | |
Mar 6 03:25:16.686: INFO: Updating stateful set ss2 | |
Mar 6 03:25:16.691: INFO: Waiting for StatefulSet statefulset-6523/ss2 to complete update | |
Mar 6 03:25:16.691: INFO: Waiting for Pod statefulset-6523/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 | |
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 | |
Mar 6 03:25:26.695: INFO: Deleting all statefulset in ns statefulset-6523 | |
Mar 6 03:25:26.698: INFO: Scaling statefulset ss2 to 0 | |
Mar 6 03:25:56.713: INFO: Waiting for statefulset status.replicas updated to 0 | |
Mar 6 03:25:56.718: INFO: Deleting statefulset ss2 | |
[AfterEach] [sig-apps] StatefulSet | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 | |
Mar 6 03:25:56.728: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
STEP: Destroying namespace "statefulset-6523" for this suite. | |
• [SLOW TEST:90.347 seconds] | |
[sig-apps] StatefulSet | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 | |
[k8s.io] Basic StatefulSet functionality [StatefulSetBasic] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 | |
should perform canary updates and phased rolling updates of template modifications [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
------------------------------ | |
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":278,"completed":109,"skipped":2008,"failed":8,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]"]} | |
SSSSSS | |
------------------------------ | |
[k8s.io] Kubelet when scheduling a busybox command in a pod | |
should print the output to logs [NodeConformance] [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
[BeforeEach] [k8s.io] Kubelet | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 | |
STEP: Creating a kubernetes client | |
Mar 6 03:25:56.735: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759 | |
STEP: Building a namespace api object, basename kubelet-test | |
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubelet-test-2668 | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[BeforeEach] [k8s.io] Kubelet | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 | |
[It] should print the output to logs [NodeConformance] [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
[AfterEach] [k8s.io] Kubelet | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 | |
Mar 6 03:25:58.886: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
STEP: Destroying namespace "kubelet-test-2668" for this suite. | |
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":278,"completed":110,"skipped":2014,"failed":8,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]"]} | |
SSSSSSSSSSSSSSSSSSSS | |
------------------------------ | |
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook | |
should execute prestop http hook properly [NodeConformance] [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
[BeforeEach] [k8s.io] Container Lifecycle Hook | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 | |
STEP: Creating a kubernetes client | |
Mar 6 03:25:58.894: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759 | |
STEP: Building a namespace api object, basename container-lifecycle-hook | |
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-lifecycle-hook-2216 | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[BeforeEach] when create a pod with lifecycle hook | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 | |
STEP: create the container to handle the HTTPGet hook request. | |
[It] should execute prestop http hook properly [NodeConformance] [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
STEP: create the pod with lifecycle hook | |
STEP: delete the pod with lifecycle hook | |
Mar 6 03:26:03.060: INFO: Waiting for pod pod-with-prestop-http-hook to disappear | |
Mar 6 03:26:03.062: INFO: Pod pod-with-prestop-http-hook still exists | |
Mar 6 03:26:05.063: INFO: Waiting for pod pod-with-prestop-http-hook to disappear | |
Mar 6 03:26:05.065: INFO: Pod pod-with-prestop-http-hook still exists | |
Mar 6 03:26:07.063: INFO: Waiting for pod pod-with-prestop-http-hook to disappear | |
Mar 6 03:26:07.066: INFO: Pod pod-with-prestop-http-hook still exists | |
Mar 6 03:26:09.063: INFO: Waiting for pod pod-with-prestop-http-hook to disappear | |
Mar 6 03:26:09.065: INFO: Pod pod-with-prestop-http-hook still exists | |
Mar 6 03:26:11.063: INFO: Waiting for pod pod-with-prestop-http-hook to disappear | |
Mar 6 03:26:11.066: INFO: Pod pod-with-prestop-http-hook still exists | |
Mar 6 03:26:13.063: INFO: Waiting for pod pod-with-prestop-http-hook to disappear | |
Mar 6 03:26:13.065: INFO: Pod pod-with-prestop-http-hook still exists | |
Mar 6 03:26:15.063: INFO: Waiting for pod pod-with-prestop-http-hook to disappear | |
Mar 6 03:26:15.065: INFO: Pod pod-with-prestop-http-hook still exists | |
Mar 6 03:26:17.063: INFO: Waiting for pod pod-with-prestop-http-hook to disappear | |
Mar 6 03:26:17.065: INFO: Pod pod-with-prestop-http-hook no longer exists | |
STEP: check prestop hook | |
[AfterEach] [k8s.io] Container Lifecycle Hook | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 | |
Mar 6 03:26:17.076: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
STEP: Destroying namespace "container-lifecycle-hook-2216" for this suite. | |
• [SLOW TEST:18.188 seconds] | |
[k8s.io] Container Lifecycle Hook | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 | |
when create a pod with lifecycle hook | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 | |
should execute prestop http hook properly [NodeConformance] [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
------------------------------ | |
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":278,"completed":111,"skipped":2034,"failed":8,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]"]} | |
[sig-storage] Secrets | |
should be consumable from pods in volume [NodeConformance] [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
[BeforeEach] [sig-storage] Secrets | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 | |
STEP: Creating a kubernetes client | |
Mar 6 03:26:17.082: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759 | |
STEP: Building a namespace api object, basename secrets | |
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in secrets-9945 | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[It] should be consumable from pods in volume [NodeConformance] [Conformance] | |
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 | |
STEP: Creating secret with name secret-test-ed365452-1e34-41bc-90bc-fd677e134e14 | |
STEP: Creating a pod to test consume secrets | |
Mar 6 03:26:17.223: INFO: Waiting up to 5m0s for pod "pod-secrets-40ea5a82-96ab-446a-8038-d545c78700a3" in namespace "secrets-9945" to be "success or failure" | |
Mar 6 03:26:17.225: INFO: Pod "pod-secrets-40ea5a82-96ab-446a-8038-d545c78700a3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.196762ms | |
Mar 6 03:26:19.228: INFO: Pod "pod-secrets-40ea5a82-96ab-446a-8038-d545c78700a3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.004860122s | |
STEP: Saw pod success | |
Mar 6 03:26:19.228: INFO: Pod "pod-secrets-40ea5a82-96ab-446a-8038-d545c78700a3" satisfied condition "success or failure" | |
Mar 6 03:26:19.230: INFO: Trying to get logs from node worker02 pod pod-secrets-40ea5a82-96ab-446a-8038-d545c78700a3 container secret-volume-test: <nil> | |
STEP: delete the pod | |
Mar 6 03:26:19.243: INFO: Waiting for pod pod-secrets-40ea5a82-96ab-446a-8038-d545c78700a3 to disappear | |
Mar 6 03:26:19.245: INFO: Pod pod-secrets-40ea5a82-96ab-446a-8 |
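
The log is truncated at this point. The secrets spec it breaks off in follows a fixed pattern: create a Secret, mount it into a pod as a volume, run a mounttest container that prints the mounted file, and wait for the pod to reach "success or failure". A minimal Go sketch of that pod wiring in core/v1 types (the secret and pod names mirror the log; the mount path, data key, and mounttest flag are illustrative assumptions):

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    pod := corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{
            Name:      "pod-secrets-40ea5a82-96ab-446a-8038-d545c78700a3",
            Namespace: "secrets-9945",
        },
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Volumes: []corev1.Volume{{
                Name: "secret-volume",
                VolumeSource: corev1.VolumeSource{
                    Secret: &corev1.SecretVolumeSource{
                        SecretName: "secret-test-ed365452-1e34-41bc-90bc-fd677e134e14",
                    },
                },
            }},
            Containers: []corev1.Container{{
                Name:  "secret-volume-test",
                Image: "gcr.io/kubernetes-e2e-test-images/mounttest:1.0",
                // Illustrative flag: dump the mounted file so the test can
                // assert on the pod log once the pod reaches Succeeded.
                Args: []string{"--file_content=/etc/secret-volume/data-1"},
                VolumeMounts: []corev1.VolumeMount{{
                    Name:      "secret-volume",
                    MountPath: "/etc/secret-volume",
                    ReadOnly:  true,
                }},
            }},
        },
    }
    out, _ := json.MarshalIndent(pod, "", "  ")
    fmt.Println(string(out))
}
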