```yaml
# Prometheus configuration to scrape Kubernetes outside the cluster
# Change master_ip and api_password to match your master server address and admin password
global:
  scrape_interval: 15s
  evaluation_interval: 15s

scrape_configs:

# metrics for the prometheus server
- job_name: 'prometheus'
  static_configs:
  - targets: ['localhost:9090']

# metrics for default/kubernetes api's from the kubernetes master
- job_name: 'kubernetes-apiservers'
  kubernetes_sd_configs:
  - role: endpoints
    api_server: https://master_ip
    tls_config:
      insecure_skip_verify: true
    basic_auth:
      username: admin
      password: api_password
  scheme: https
  tls_config:
    insecure_skip_verify: true
  basic_auth:
    username: admin
    password: api_password
  relabel_configs:
  - source_labels: [__meta_kubernetes_namespace, __meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
    action: keep
    regex: default;kubernetes;https

# metrics for the kubernetes node kubelet service (collection proxied through master)
- job_name: 'kubernetes-nodes'
  kubernetes_sd_configs:
  - role: node
    api_server: https://master_ip
    tls_config:
      insecure_skip_verify: true
    basic_auth:
      username: admin
      password: api_password
  scheme: https
  tls_config:
    insecure_skip_verify: true
  basic_auth:
    username: admin
    password: api_password
  relabel_configs:
  - action: labelmap
    regex: __meta_kubernetes_node_label_(.+)
  - target_label: __address__
    replacement: master_ip:443
  - source_labels: [__meta_kubernetes_node_name]
    regex: (.+)
    target_label: __metrics_path__
    replacement: /api/v1/nodes/${1}/proxy/metrics

# metrics from service endpoints on /metrics over https via the master proxy
# set annotation (prometheus.io/scrape: true) to enable
# Example: kubectl annotate svc myservice prometheus.io/scrape=true
- job_name: 'kubernetes-service-endpoints'
  kubernetes_sd_configs:
  - role: endpoints
    api_server: https://master_ip
    tls_config:
      insecure_skip_verify: true
    basic_auth:
      username: admin
      password: api_password
  scheme: https
  tls_config:
    insecure_skip_verify: true
  basic_auth:
    username: admin
    password: api_password
  relabel_configs:
  - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape]
    action: keep
    regex: true
  - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_port]
    action: replace
    regex: (\d+)
    target_label: __meta_kubernetes_pod_container_port_number
  - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path]
    action: replace
    regex: ()
    target_label: __meta_kubernetes_service_annotation_prometheus_io_path
    replacement: /metrics
  - source_labels: [__meta_kubernetes_namespace, __meta_kubernetes_service_name, __meta_kubernetes_pod_container_port_number, __meta_kubernetes_service_annotation_prometheus_io_path]
    target_label: __metrics_path__
    regex: (.+);(.+);(.+);(.+)
    replacement: /api/v1/namespaces/$1/services/$2:$3/proxy$4
  - target_label: __address__
    replacement: master_ip:443
  - action: labelmap
    regex: __meta_kubernetes_service_label_(.+)
  - source_labels: [__meta_kubernetes_namespace]
    action: replace
    target_label: kubernetes_namespace
  - source_labels: [__meta_kubernetes_service_name]
    action: replace
    target_label: kubernetes_name
  - source_labels: [__meta_kubernetes_pod_node_name]
    action: replace
    target_label: instance

# metrics from pod endpoints on /metrics over https via the master proxy
# set annotation (prometheus.io/scrape: true) to enable
# Example: kubectl annotate pod mypod prometheus.io/scrape=true
- job_name: 'kubernetes-pods'
  kubernetes_sd_configs:
  - role: pod
    api_server: https://master_ip
    tls_config:
      insecure_skip_verify: true
    basic_auth:
      username: admin
      password: api_password
  scheme: https
  tls_config:
    insecure_skip_verify: true
  basic_auth:
    username: admin
    password: api_password
  relabel_configs:
  - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
    action: keep
    regex: true
  - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
    action: replace
    regex: ()
    target_label: __meta_kubernetes_pod_annotation_prometheus_io_path
    replacement: /metrics
  - source_labels: [__meta_kubernetes_namespace, __meta_kubernetes_pod_name, __meta_kubernetes_pod_container_port_number, __meta_kubernetes_pod_annotation_prometheus_io_path]
    target_label: __metrics_path__
    regex: (.+);(.+);(.+);(.+)
    replacement: /api/v1/namespaces/$1/pods/$2:$3/proxy$4
  - target_label: __address__
    replacement: master_ip:443
  - action: labelmap
    regex: __meta_kubernetes_pod_label_(.+)
  - source_labels: [__meta_kubernetes_namespace]
    action: replace
    target_label: kubernetes_namespace
  - source_labels: [__meta_kubernetes_pod_name]
    action: replace
    target_label: kubernetes_pod_name
  - source_labels: [__meta_kubernetes_pod_node_name]
    action: replace
    target_label: instance
```
Hi, thank you for the config.
Could you also provide Grafana dashboards for these metrics?
May I ask why you use basic_auth twice for each job, once inside the kubernetes_sd_configs role and once outside?
@aghassabian The one inside kubernetes_sd_configs is used to authenticate with the API server for discovery. The one outside is used to authenticate against the scrape targets themselves.
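In other words, a minimal sketch annotating where each credential is applied (same placeholder values as the config above):

```yaml
- job_name: 'kubernetes-pods'
  kubernetes_sd_configs:
  - role: pod
    api_server: https://master_ip
    basic_auth:           # authenticates the service-discovery calls to the API server
      username: admin
      password: api_password
  basic_auth:             # authenticates the scrape requests to the discovered targets
    username: admin
    password: api_password
```

Since every target here is proxied through the master anyway, both ends happen to use the same credentials.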
I have a K8s cluster with some applications running that expose Prometheus endpoints. I want to discover all the endpoints automatically from another K8s cluster where Prometheus is installed. How can I find the API password for my K8s cluster? Also, there is no TLS config (certificates) for HTTPS here. How does it work without certificates?
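For what it's worth, the gist works without certificates only because `insecure_skip_verify: true` turns off certificate verification entirely. A hedged sketch of the verified alternative, assuming you can copy the cluster CA and a service-account token to the Prometheus host (both file paths below are placeholders):

```yaml
- job_name: 'kubernetes-pods'
  kubernetes_sd_configs:
  - role: pod
    api_server: https://master_ip
    tls_config:
      ca_file: /etc/prometheus/k8s-ca.crt         # placeholder: the cluster CA certificate
    bearer_token_file: /etc/prometheus/k8s-token  # placeholder: an exported service-account token
  scheme: https
  tls_config:
    ca_file: /etc/prometheus/k8s-ca.crt
  bearer_token_file: /etc/prometheus/k8s-token
```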
I am getting the error below for a few endpoints:
Get https://X.X.X.X:443/api/v1/namespaces/kube-system/pods/kube-dns-6b4f4b544c-qwxxz:10053/proxy/metrics: context deadline exceeded
Get https://18.222.213.34:443/api/v1/namespaces/kube-system/pods/kube-dns-6b4f4b544c-qwxxz:53/proxy/metrics: context deadline exceeded
Get https://18.222.213.34:443/api/v1/namespaces/kube-system/pods/kube-dns-6b4f4b544c-88mrw:10053/proxy/metrics: context deadline exceeded
Get https://18.222.213.34:443/api/v1/namespaces/kube-system/pods/kube-dns-6b4f4b544c-88mrw:53/proxy/metrics: context deadline exceeded
I was getting the same; exposing them over NodePort fixed it. When services with a ClusterIP are proxied, the scrape times out.
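If you would rather keep the proxy approach, a hedged sketch that simply drops the pod ports which time out (53 and 10053 are taken from the errors above; adjust for your cluster):

```yaml
relabel_configs:
- source_labels: [__meta_kubernetes_pod_container_port_number]
  action: drop
  regex: (53|10053)
```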
In the kubernetes-service-endpoints job, if you are using the Prometheus Operator and a ServiceMonitor like me, you might want to use this relabelling instead:

```yaml
- source_labels:
  - __meta_kubernetes_namespace
  - __meta_kubernetes_pod_name
  - __meta_kubernetes_pod_container_port_number
  - __meta_kubernetes_service_annotation_prometheus_io_path
  target_label: __metrics_path__
  regex: (.+);(.+);(.+);(.+)
  replacement: /api/v1/namespaces/$1/pods/$2:$3/proxy$4
```

because a ServiceMonitor is ultimately trying to scrape the endpoints, not the service.
Hi,
I don't have the metrics server running inside my k3s cluster. Would I still be able to get the metrics?
Hi,
I am trying this config but getting the following error:
level=error ts=2020-07-21T17:02:20.618Z caller=klog.go:94 component=k8s_client_runtime func=ErrorDepth msg="/app/discovery/kubernetes/kubernetes.go:333: Failed to list *v1.Node: Get https://master-server-ip/api/v1/nodes?limit=500&resourceVersion=0: dial tcp master-server-ip:443: connect: connection refused"
Not sure if I am missing anything. Any help would be greatly appreciated.
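One thing worth checking (an assumption on my part, not something the gist states): many clusters serve the API on port 6443 rather than 443, in which case every api_server line needs the port spelled out, e.g.:

```yaml
kubernetes_sd_configs:
- role: node
  api_server: https://master_ip:6443   # 6443 is a guess; use your API server's actual port
```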
I am still seeing "server returned HTTP status 400 Bad Request" for pods and services on "/proxy/metrics".
When I test this kind of Prometheus configuration inside a k8s cluster, I get an HTTP 401.
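A guess at the cause (this goes beyond the gist, which targets out-of-cluster scraping): when Prometheus runs inside the cluster, you would normally drop api_server and basic_auth and authenticate with the pod's own service-account credentials, which Kubernetes mounts at a well-known path:

```yaml
- job_name: 'kubernetes-apiservers'
  kubernetes_sd_configs:
  - role: endpoints   # no api_server: in-cluster discovery uses the pod's service account
  scheme: https
  tls_config:
    ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
  bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
```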