
I am using Minikube and I am trying to set up Heapster with Grafana and InfluxDB. Following these instructions, every ReplicationController, Pod, and Service was created successfully, except for the monitoring-grafana service.

$ kubectl get svc --namespace=kube-system
NAME                   CLUSTER-IP   EXTERNAL-IP   PORT(S)             AGE
kube-dns               10.0.0.10    <none>        53/UDP,53/TCP       4d
kubernetes-dashboard   10.0.0.122   <nodes>       80/TCP              12m
monitoring-influxdb    10.0.0.66    <nodes>       8083/TCP,8086/TCP   1h

$ kubectl get rc --namespace=kube-system
NAME                   DESIRED   CURRENT   AGE
heapster               1         1         1h
influxdb-grafana       1         1         34m
kubernetes-dashboard   1         1         13m

$ kubectl get po --namespace=kube-system
NAME                            READY     STATUS    RESTARTS   AGE
heapster-hrgv3                  1/1       Running   1          1h
influxdb-grafana-9pqv8          2/2       Running   0          34m
kube-addon-manager-minikubevm   1/1       Running   6          4d
kubernetes-dashboard-rrpes      1/1       Running   0          13m

$ kubectl cluster-info
Kubernetes master is running at https://192.168.99.100:8443
kubernetes-dashboard is running at https://192.168.99.100:8443/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard

I only modified grafana-service.yaml to add type: NodePort:

apiVersion: v1
kind: Service
metadata:
  labels:
    kubernetes.io/cluster-service: 'true'
    kubernetes.io/name: monitoring-grafana
  name: monitoring-grafana
  namespace: kube-system
spec:
  ports:
  - port: 80
    targetPort: 3000
  selector:
    name: influxGrafana
  type: NodePort
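
Since the manifest does not pin a nodePort, Kubernetes picks one from the default 30000-32767 range. As a side note, the assigned port can be checked, for example, with (assuming the service survives long enough to be inspected):

$ kubectl describe svc monitoring-grafana --namespace=kube-system
# or print just the assigned node port
$ kubectl get svc monitoring-grafana --namespace=kube-system -o jsonpath='{.spec.ports[0].nodePort}'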

When I run kubectl create -f grafana-service.yaml, Kubernetes appears to create the service successfully, but it does not actually stick: the service is created and then disappears about 10 seconds later.

$ kubectl create -f grafana-service.yaml
You have exposed your service on an external port on all nodes in your
cluster.  If you want to expose this service to the external internet, you may
need to set up firewall rules for the service port(s) (tcp:30357) to serve traffic.

See http://releases.k8s.io/release-1.3/docs/user-guide/services-firewalls.md for more details.
service "monitoring-grafana" created
$ kubectl get svc --namespace=kube-system
NAME                   CLUSTER-IP   EXTERNAL-IP   PORT(S)             AGE
kube-dns               10.0.0.10    <none>        53/UDP,53/TCP       4d
kubernetes-dashboard   10.0.0.122   <nodes>       80/TCP              20m
monitoring-grafana     10.0.0.251   <nodes>       80/TCP              3s
monitoring-influxdb    10.0.0.66    <nodes>       8083/TCP,8086/TCP   1h
$ kubectl get svc --namespace=kube-system
NAME                   CLUSTER-IP   EXTERNAL-IP   PORT(S)             AGE
kube-dns               10.0.0.10    <none>        53/UDP,53/TCP       4d
kubernetes-dashboard   10.0.0.122   <nodes>       80/TCP              20m
monitoring-influxdb    10.0.0.66    <nodes>       8083/TCP,8086/TCP   1h
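
To try to catch whatever is deleting the service, a rough diagnostic (these are the commands I would run next; output not captured here) is to watch the service list and the namespace events right after creating it:

$ kubectl get svc --namespace=kube-system -w
$ kubectl get events --namespace=kube-system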

I have already checked the logs of the containers (InfluxDB, Grafana, and Heapster) and everything looks fine.

$ kubectl logs influxdb-grafana-9pqv8 grafana --namespace=kube-system
Influxdb service URL is provided.
Using the following URL for InfluxDB: http://monitoring-influxdb:8086
Using the following backend access mode for InfluxDB: proxy
Starting Grafana in the background
Waiting for Grafana to come up...
2016/08/09 16:51:04 [I] Starting Grafana
2016/08/09 16:51:04 [I] Version: 2.6.0, Commit: v2.6.0, Build date: 2015-12-14 14:18:01 +0000 UTC
2016/08/09 16:51:04 [I] Configuration Info
Config files:
  [0]: /usr/share/grafana/conf/defaults.ini
  [1]: /etc/grafana/grafana.ini
Command lines overrides:
  [0]: default.paths.data=/var/lib/grafana
  [1]: default.paths.logs=/var/log/grafana
    Environment variables used:
  [0]: GF_SERVER_HTTP_PORT=3000
  [1]: GF_SERVER_ROOT_URL=/api/v1/proxy/namespaces/kube-system/services/monitoring-grafana/
  [2]: GF_AUTH_ANONYMOUS_ENABLED=true
  [3]: GF_AUTH_ANONYMOUS_ORG_ROLE=Admin
  [4]: GF_AUTH_BASIC_ENABLED=false
Paths:
  home: /usr/share/grafana
  data: /var/lib/grafana
  logs: /var/log/grafana

2016/08/09 16:51:04 [I] Database: sqlite3
2016/08/09 16:51:04 [I] Migrator: Starting DB migration
2016/08/09 16:51:04 [I] Migrator: exec migration id: create migration_log table
2016/08/09 16:51:04 [I] Migrator: exec migration id: create user table
2016/08/09 16:51:04 [I] Migrator: exec migration id: add unique index user.login
2016/08/09 16:51:04 [I] Migrator: exec migration id: add unique index user.email
2016/08/09 16:51:04 [I] Migrator: exec migration id: drop index UQE_user_login - v1
2016/08/09 16:51:04 [I] Migrator: exec migration id: drop index UQE_user_email - v1
2016/08/09 16:51:04 [I] Migrator: exec migration id: Rename table user to user_v1 - v1
2016/08/09 16:51:04 [I] Migrator: exec migration id: create user table v2
2016/08/09 16:51:04 [I] Migrator: exec migration id: create index UQE_user_login - v2
2016/08/09 16:51:04 [I] Migrator: exec migration id: create index UQE_user_email - v2
2016/08/09 16:51:04 [I] Migrator: exec migration id: copy data_source v1 to v2
2016/08/09 16:51:04 [I] Migrator: exec migration id: Drop old table user_v1
2016/08/09 16:51:04 [I] Migrator: exec migration id: create temp user table v1-7
2016/08/09 16:51:04 [I] Migrator: exec migration id: create index IDX_temp_user_email - v1-7
2016/08/09 16:51:04 [I] Migrator: exec migration id: create index IDX_temp_user_org_id - v1-7
2016/08/09 16:51:04 [I] Migrator: exec migration id: create index IDX_temp_user_code - v1-7
2016/08/09 16:51:04 [I] Migrator: exec migration id: create index IDX_temp_user_status - v1-7
2016/08/09 16:51:04 [I] Migrator: exec migration id: create star table
2016/08/09 16:51:04 [I] Migrator: exec migration id: add unique index star.user_id_dashboard_id
2016/08/09 16:51:04 [I] Migrator: exec migration id: create org table v1
2016/08/09 16:51:04 [I] Migrator: exec migration id: create index UQE_org_name - v1
2016/08/09 16:51:04 [I] Migrator: exec migration id: create org_user table v1
2016/08/09 16:51:04 [I] Migrator: exec migration id: create index IDX_org_user_org_id - v1
2016/08/09 16:51:04 [I] Migrator: exec migration id: create index UQE_org_user_org_id_user_id - v1
2016/08/09 16:51:04 [I] Migrator: exec migration id: copy data account to org
2016/08/09 16:51:04 [I] Migrator: skipping migration id: copy data account to org, condition not fulfilled
2016/08/09 16:51:04 [I] Migrator: exec migration id: copy data account_user to org_user
2016/08/09 16:51:04 [I] Migrator: skipping migration id: copy data account_user to org_user, condition not fulfilled
2016/08/09 16:51:04 [I] Migrator: exec migration id: Drop old table account
2016/08/09 16:51:04 [I] Migrator: exec migration id: Drop old table account_user
2016/08/09 16:51:04 [I] Migrator: exec migration id: create dashboard table
2016/08/09 16:51:04 [I] Migrator: exec migration id: add index dashboard.account_id
2016/08/09 16:51:04 [I] Migrator: exec migration id: add unique index dashboard_account_id_slug
2016/08/09 16:51:04 [I] Migrator: exec migration id: create dashboard_tag table
2016/08/09 16:51:04 [I] Migrator: exec migration id: add unique index dashboard_tag.dasboard_id_term
2016/08/09 16:51:04 [I] Migrator: exec migration id: drop index UQE_dashboard_tag_dashboard_id_term - v1
2016/08/09 16:51:04 [I] Migrator: exec migration id: Rename table dashboard to dashboard_v1 - v1
2016/08/09 16:51:04 [I] Migrator: exec migration id: create dashboard v2
2016/08/09 16:51:04 [I] Migrator: exec migration id: create index IDX_dashboard_org_id - v2
2016/08/09 16:51:04 [I] Migrator: exec migration id: create index UQE_dashboard_org_id_slug - v2
2016/08/09 16:51:04 [I] Migrator: exec migration id: copy dashboard v1 to v2
2016/08/09 16:51:04 [I] Migrator: exec migration id: drop table dashboard_v1
2016/08/09 16:51:04 [I] Migrator: exec migration id: alter dashboard.data to mediumtext v1
2016/08/09 16:51:04 [I] Migrator: exec migration id: create data_source table
2016/08/09 16:51:04 [I] Migrator: exec migration id: add index data_source.account_id
2016/08/09 16:51:04 [I] Migrator: exec migration id: add unique index data_source.account_id_name
2016/08/09 16:51:04 [I] Migrator: exec migration id: drop index IDX_data_source_account_id - v1
2016/08/09 16:51:04 [I] Migrator: exec migration id: drop index UQE_data_source_account_id_name - v1
2016/08/09 16:51:04 [I] Migrator: exec migration id: Rename table data_source to data_source_v1 - v1
2016/08/09 16:51:04 [I] Migrator: exec migration id: create data_source table v2
2016/08/09 16:51:04 [I] Migrator: exec migration id: create index IDX_data_source_org_id - v2
2016/08/09 16:51:04 [I] Migrator: exec migration id: create index UQE_data_source_org_id_name - v2
2016/08/09 16:51:04 [I] Migrator: exec migration id: copy data_source v1 to v2
2016/08/09 16:51:04 [I] Migrator: exec migration id: Drop old table data_source_v1 #2
2016/08/09 16:51:04 [I] Migrator: exec migration id: Add column with_credentials
2016/08/09 16:51:04 [I] Migrator: exec migration id: create api_key table
2016/08/09 16:51:04 [I] Migrator: exec migration id: add index api_key.account_id
2016/08/09 16:51:04 [I] Migrator: exec migration id: add index api_key.key
2016/08/09 16:51:04 [I] Migrator: exec migration id: add index api_key.account_id_name
2016/08/09 16:51:04 [I] Migrator: exec migration id: drop index IDX_api_key_account_id - v1
2016/08/09 16:51:04 [I] Migrator: exec migration id: drop index UQE_api_key_key - v1
2016/08/09 16:51:04 [I] Migrator: exec migration id: drop index UQE_api_key_account_id_name - v1
2016/08/09 16:51:04 [I] Migrator: exec migration id: Rename table api_key to api_key_v1 - v1
2016/08/09 16:51:04 [I] Migrator: exec migration id: create api_key table v2
2016/08/09 16:51:04 [I] Migrator: exec migration id: create index IDX_api_key_org_id - v2
2016/08/09 16:51:04 [I] Migrator: exec migration id: create index UQE_api_key_key - v2
2016/08/09 16:51:04 [I] Migrator: exec migration id: create index UQE_api_key_org_id_name - v2
2016/08/09 16:51:04 [I] Migrator: exec migration id: copy api_key v1 to v2
2016/08/09 16:51:04 [I] Migrator: exec migration id: Drop old table api_key_v1
2016/08/09 16:51:04 [I] Migrator: exec migration id: create dashboard_snapshot table v4
2016/08/09 16:51:04 [I] Migrator: exec migration id: drop table dashboard_snapshot_v4 #1
2016/08/09 16:51:04 [I] Migrator: exec migration id: create dashboard_snapshot table v5 #2
2016/08/09 16:51:04 [I] Migrator: exec migration id: create index UQE_dashboard_snapshot_key - v5
2016/08/09 16:51:04 [I] Migrator: exec migration id: create index UQE_dashboard_snapshot_delete_key - v5
2016/08/09 16:51:04 [I] Migrator: exec migration id: create index IDX_dashboard_snapshot_user_id - v5
2016/08/09 16:51:04 [I] Migrator: exec migration id: alter dashboard_snapshot to mediumtext v2
2016/08/09 16:51:04 [I] Migrator: exec migration id: create quota table v1
2016/08/09 16:51:04 [I] Migrator: exec migration id: create index UQE_quota_org_id_user_id_target - v1
2016/08/09 16:51:04 [I] Created default admin user: admin
2016/08/09 16:51:04 [I] Listen: http://0.0.0.0:3000/api/v1/proxy/namespaces/kube-system/services/monitoring-grafana
.Grafana is up and running.
Creating default influxdb datasource...
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   242  100    37  100   205   2222  12314 --:--:-- --:--:-- --:--:-- 12812
HTTP/1.1 200 OK
Content-Type: application/json; charset=UTF-8
Set-Cookie: grafana_sess=5d74e6fdfa244c4c; Path=/api/v1/proxy/namespaces/kube-system/services/monitoring-grafana; HttpOnly
Date: Tue, 09 Aug 2016 16:51:06 GMT
Content-Length: 37

{"id":1,"message":"Datasource added"}
Importing default dashboards...
Importing /dashboards/cluster.json ...
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 71639  100    49  100 71590    376   537k --:--:-- --:--:-- --:--:--  541k
HTTP/1.1 100 Continue

HTTP/1.1 200 OK
Content-Type: application/json; charset=UTF-8
Set-Cookie: grafana_sess=b7bc3ca23c09d7b3; Path=/api/v1/proxy/namespaces/kube-system/services/monitoring-grafana; HttpOnly
Date: Tue, 09 Aug 2016 16:51:06 GMT
Content-Length: 49

{"slug":"cluster","status":"success","version":0}
Done importing /dashboards/cluster.json
Importing /dashboards/pods.json ...
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 32141  100    46  100 32095   2476  1687k --:--:-- --:--:-- --:--:-- 1741k
HTTP/1.1 100 Continue

HTTP/1.1 200 OK
Content-Type: application/json; charset=UTF-8
Set-Cookie: grafana_sess=79de9b266893d792; Path=/api/v1/proxy/namespaces/kube-system/services/monitoring-grafana; HttpOnly
Date: Tue, 09 Aug 2016 16:51:06 GMT
Content-Length: 46

{"slug":"pods","status":"success","version":0}
Done importing /dashboards/pods.json

Bringing Grafana back to the foreground
exec /usr/sbin/grafana-server --homepath=/usr/share/grafana --config=/etc/grafana/grafana.ini cfg:default.paths.data=/var/lib/grafana cfg:default.paths.logs=/var/log/grafana

$ kubectl logs influxdb-grafana-9pqv8 influxdb --namespace=kube-system

 8888888           .d888 888                   8888888b.  888888b.
   888            d88P"  888                   888  "Y88b 888  "88b
   888            888    888                   888    888 888  .88P
   888   88888b.  888888 888 888  888 888  888 888    888 8888888K.
   888   888 "88b 888    888 888  888  Y8bd8P' 888    888 888  "Y88b
   888   888  888 888    888 888  888   X88K   888    888 888    888
   888   888  888 888    888 Y88b 888 .d8""8b. 888  .d88P 888   d88P
 8888888 888  888 888    888  "Y88888 888  888 8888888P"  8888888P"

2016/08/09 16:51:04 InfluxDB starting, version 0.9.4.1, branch 0.9.4, commit c4f85f84765e27bfb5e58630d0dea38adeacf543
2016/08/09 16:51:04 Go version go1.5, GOMAXPROCS set to 1
2016/08/09 16:51:04 Using configuration at: /etc/influxdb.toml
[metastore] 2016/08/09 16:51:04 Using data dir: /data/meta
[metastore] 2016/08/09 16:51:04 Node at localhost:8088 [Follower]
[metastore] 2016/08/09 16:51:05 Node at localhost:8088 [Leader]. peers=[localhost:8088]
[metastore] 2016/08/09 16:51:05 Created local node: id=1, host=localhost:8088
[monitor] 2016/08/09 16:51:05 Starting monitor system
[monitor] 2016/08/09 16:51:05 'build' registered for diagnostics monitoring
[monitor] 2016/08/09 16:51:05 'runtime' registered for diagnostics monitoring
[monitor] 2016/08/09 16:51:05 'network' registered for diagnostics monitoring
[monitor] 2016/08/09 16:51:05 'system' registered for diagnostics monitoring
[store] 2016/08/09 16:51:05 Using data dir: /data/data
[handoff] 2016/08/09 16:51:05 Starting hinted handoff service
[handoff] 2016/08/09 16:51:05 Using data dir: /data/hh
[tcp] 2016/08/09 16:51:05 Starting cluster service
[shard-precreation] 2016/08/09 16:51:05 Starting precreation service with check interval of 10m0s, advance period of 30m0s
[snapshot] 2016/08/09 16:51:05 Starting snapshot service
[copier] 2016/08/09 16:51:05 Starting copier service
[admin] 2016/08/09 16:51:05 Starting admin service
[admin] 2016/08/09 16:51:05 Listening on HTTP: [::]:8083
[continuous_querier] 2016/08/09 16:51:05 Starting continuous query service
[httpd] 2016/08/09 16:51:05 Starting HTTP service
[httpd] 2016/08/09 16:51:05 Authentication enabled: false
[httpd] 2016/08/09 16:51:05 Listening on HTTP: [::]:8086
[retention] 2016/08/09 16:51:05 Starting retention policy enforcement service with check interval of 30m0s
[run] 2016/08/09 16:51:05 Listening for signals
[monitor] 2016/08/09 16:51:05 Storing statistics in database '_internal' retention policy '', at interval 10s
[metastore] 2016/08/09 16:51:05 database '_internal' created
[metastore] 2016/08/09 16:51:05 retention policy 'default' for database '_internal' created
[metastore] 2016/08/09 16:51:05 retention policy 'monitor' for database '_internal' created
2016/08/09 16:51:05 Sending anonymous usage statistics to m.influxdb.com
[wal] 2016/08/09 16:51:15 WAL starting with 30720 ready series size, 0.50 compaction threshold, and 20971520 partition size threshold
[wal] 2016/08/09 16:51:15 WAL writing to /data/wal/_internal/monitor/1
[wal] 2016/08/09 16:51:20 Flush due to idle. Flushing 1 series with 1 points and 143 bytes from partition 1
[wal] 2016/08/09 16:51:20 write to index of partition 1 took 496.995µs
[wal] 2016/08/09 16:51:30 Flush due to idle. Flushing 5 series with 5 points and 364 bytes from partition 1
[wal] 2016/08/09 16:51:30 write to index of partition 1 took 436.627µs
[wal] 2016/08/09 16:51:40 Flush due to idle. Flushing 5 series with 5 points and 364 bytes from partition 1
[wal] 2016/08/09 16:51:40 write to index of partition 1 took 360.64µs
[wal] 2016/08/09 16:51:50 Flush due to idle. Flushing 5 series with 5 points and 364 bytes from partition 1
[wal] 2016/08/09 16:51:50 write to index of partition 1 took 383.191µs
[wal] 2016/08/09 16:52:00 Flush due to idle. Flushing 5 series with 5 points and 364 bytes from partition 1
[wal] 2016/08/09 16:52:00 write to index of partition 1 took 362.55µs
[wal] 2016/08/09 16:52:10 Flush due to idle. Flushing 5 series with 5 points and 364 bytes from partition 1
[wal] 2016/08/09 16:52:10 write to index of partition 1 took 337.138µs
[wal] 2016/08/09 16:52:20 Flush due to idle. Flushing 5 series with 5 points and 364 bytes from partition 1
[wal] 2016/08/09 16:52:20 write to index of partition 1 took 356.146µs
[wal] 2016/08/09 16:52:30 Flush due to idle. Flushing 5 series with 5 points and 364 bytes from partition 1
[wal] 2016/08/09 16:52:30 write to index of partition 1 took 398.484µs
[wal] 2016/08/09 16:52:40 Flush due to idle. Flushing 5 series with 5 points and 364 bytes from partition 1
[wal] 2016/08/09 16:52:40 write to index of partition 1 took 473.95µs
[wal] 2016/08/09 16:52:50 Flush due to idle. Flushing 5 series with 5 points and 364 bytes from partition 1
[wal] 2016/08/09 16:52:50 write to index of partition 1 took 255.661µs
[wal] 2016/08/09 16:53:00 Flush due to idle. Flushing 5 series with 5 points and 364 bytes from partition 1
[wal] 2016/08/09 16:53:00 write to index of partition 1 took 352.629µs
[wal] 2016/08/09 16:53:10 Flush due to idle. Flushing 5 series with 5 points and 364 bytes from partition 1
[wal] 2016/08/09 16:53:10 write to index of partition 1 took 373.52µs
[http] 2016/08/09 16:53:12 172.17.0.2 - root [09/Aug/2016:16:53:12 +0000] GET /ping HTTP/1.1 204 0 - heapster/1.2.0-beta.0 c2197fd8-5e51-11e6-8001-000000000000 80.938µs
[http] 2016/08/09 16:53:12 172.17.0.2 - root [09/Aug/2016:16:53:12 +0000] POST /write?consistency=&db=k8s&precision=&rp=default HTTP/1.1 404 50 - heapster/1.2.0-beta.0 c21e2912-5e51-11e6-8002-000000000000 18.498818ms
[wal] 2016/08/09 16:53:20 Flush due to idle. Flushing 6 series with 6 points and 408 bytes from partition 1
[wal] 2016/08/09 16:53:20 write to index of partition 1 took 463.429µs
[wal] 2016/08/09 16:53:30 Flush due to idle. Flushing 6 series with 6 points and 408 bytes from partition 1
[wal] 2016/08/09 16:53:30 write to index of partition 1 took 486.92µs
[wal] 2016/08/09 16:53:40 Flush due to idle. Flushing 6 series with 6 points and 408 bytes from partition 1
[wal] 2016/08/09 16:53:40 write to index of partition 1 took 489.395µs
[wal] 2016/08/09 16:53:50 Flush due to idle. Flushing 6 series with 6 points and 408 bytes from partition 1
[wal] 2016/08/09 16:53:50 write to index of partition 1 took 502.615µs
[wal] 2016/08/09 16:54:00 Flush due to idle. Flushing 6 series with 6 points and 408 bytes from partition 1
[wal] 2016/08/09 16:54:00 write to index of partition 1 took 526.287µs
[http] 2016/08/09 16:54:05 172.17.0.2 - root [09/Aug/2016:16:54:05 +0000] GET /ping HTTP/1.1 204 0 - heapster/1.2.0-beta.0 e183bf22-5e51-11e6-8003-000000000000 77.559µs
[query] 2016/08/09 16:54:05 CREATE DATABASE k8s
[metastore] 2016/08/09 16:54:05 database 'k8s' created
[metastore] 2016/08/09 16:54:05 retention policy 'default' for database 'k8s' created
[http] 2016/08/09 16:54:05 172.17.0.2 - root [09/Aug/2016:16:54:05 +0000] GET /query?db=&q=CREATE+DATABASE+k8s HTTP/1.1 200 40 - heapster/1.2.0-beta.0 e183d606-5e51-11e6-8004-000000000000 1.435103ms
[wal] 2016/08/09 16:54:05 WAL starting with 30720 ready series size, 0.50 compaction threshold, and 20971520 partition size threshold
[wal] 2016/08/09 16:54:05 WAL writing to /data/wal/k8s/default/2
[http] 2016/08/09 16:54:05 172.17.0.2 - root [09/Aug/2016:16:54:05 +0000] POST /write?consistency=&db=k8s&precision=&rp=default HTTP/1.1 204 0 - heapster/1.2.0-beta.0 e1860e09-5e51-11e6-8005-000000000000 30.444828ms
[wal] 2016/08/09 16:54:10 Flush due to idle. Flushing 8 series with 8 points and 514 bytes from partition 1
[wal] 2016/08/09 16:54:10 write to index of partition 1 took 530.292µs
[wal] 2016/08/09 16:54:11 Flush due to idle. Flushing 261 series with 261 points and 4437 bytes from partition 1
[wal] 2016/08/09 16:54:11 write to index of partition 1 took 32.567355ms
[wal] 2016/08/09 16:54:20 Flush due to idle. Flushing 9 series with 9 points and 612 bytes from partition 1
[wal] 2016/08/09 16:54:20 write to index of partition 1 took 1.549305ms
[wal] 2016/08/09 16:54:30 Flush due to idle. Flushing 9 series with 9 points and 612 bytes from partition 1
[wal] 2016/08/09 16:54:30 write to index of partition 1 took 572.059µs
[wal] 2016/08/09 16:54:40 Flush due to idle. Flushing 9 series with 9 points and 612 bytes from partition 1
[wal] 2016/08/09 16:54:40 write to index of partition 1 took 580.618µs
[wal] 2016/08/09 16:54:50 Flush due to idle. Flushing 9 series with 9 points and 612 bytes from partition 1
[wal] 2016/08/09 16:54:50 write to index of partition 1 took 641.815µs
[wal] 2016/08/09 16:55:01 Flush due to idle. Flushing 9 series with 9 points and 612 bytes from partition 1
[wal] 2016/08/09 16:55:01 write to index of partition 1 took 385.986µs
[http] 2016/08/09 16:55:05 172.17.0.2 - root [09/Aug/2016:16:55:05 +0000] POST /write?consistency=&db=k8s&precision=&rp=default HTTP/1.1 204 0 - heapster/1.2.0-beta.0 05482b86-5e52-11e6-8006-000000000000 10.363919ms
[wal] 2016/08/09 16:55:10 Flush due to idle. Flushing 261 series with 261 points and 4437 bytes from partition 1
[wal] 2016/08/09 16:55:10 write to index of partition 1 took 19.304596ms
[wal] 2016/08/09 16:55:11 Flush due to idle. Flushing 9 series with 9 points and 612 bytes from partition 1
[wal] 2016/08/09 16:55:11 write to index of partition 1 took 638.219µs
[wal] 2016/08/09 16:55:21 Flush due to idle. Flushing 9 series with 9 points and 612 bytes from partition 1
[wal] 2016/08/09 16:55:21 write to index of partition 1 took 409.537µs
[wal] 2016/08/09 16:55:31 Flush due to idle. Flushing 9 series with 9 points and 612 bytes from partition 1
[wal] 2016/08/09 16:55:31 write to index of partition 1 took 442.186µs
[wal] 2016/08/09 16:55:41 Flush due to idle. Flushing 9 series with 9 points and 612 bytes from partition 1
[wal] 2016/08/09 16:55:41 write to index of partition 1 took 417.074µs
[wal] 2016/08/09 16:55:51 Flush due to idle. Flushing 9 series with 9 points and 612 bytes from partition 1
[wal] 2016/08/09 16:55:51 write to index of partition 1 took 434.209µs
[wal] 2016/08/09 16:56:01 Flush due to idle. Flushing 9 series with 9 points and 612 bytes from partition 1
[wal] 2016/08/09 16:56:01 write to index of partition 1 took 439.568µs
[http] 2016/08/09 16:56:05 172.17.0.2 - root [09/Aug/2016:16:56:05 +0000] POST /write?consistency=&db=k8s&precision=&rp=default HTTP/1.1 204 0 - heapster/1.2.0-beta.0 290b8b5e-5e52-11e6-8007-000000000000 5.954015ms
[wal] 2016/08/09 16:56:10 Flush due to idle. Flushing 261 series with 261 points and 4437 bytes from partition 1
[wal] 2016/08/09 16:56:10 write to index of partition 1 took 16.643255ms
[wal] 2016/08/09 16:56:11 Flush due to idle. Flushing 9 series with 9 points and 612 bytes from partition 1
[wal] 2016/08/09 16:56:11 write to index of partition 1 took 479.833µs
[wal] 2016/08/09 16:56:21 Flush due to idle. Flushing 9 series with 9 points and 612 bytes from partition 1
[wal] 2016/08/09 16:56:21 write to index of partition 1 took 631.107µs
[wal] 2016/08/09 16:56:31 Flush due to idle. Flushing 9 series with 9 points and 612 bytes from partition 1
[wal] 2016/08/09 16:56:31 write to index of partition 1 took 694.61µs
[wal] 2016/08/09 16:56:41 Flush due to idle. Flushing 9 series with 9 points and 612 bytes from partition 1
[wal] 2016/08/09 16:56:41 write to index of partition 1 took 708.474µs
[wal] 2016/08/09 16:56:51 Flush due to idle. Flushing 9 series with 9 points and 612 bytes from partition 1
[wal] 2016/08/09 16:56:51 write to index of partition 1 took 627.979µs
[wal] 2016/08/09 16:57:01 Flush due to idle. Flushing 9 series with 9 points and 612 bytes from partition 1

I also tried creating the service from the Kubernetes dashboard, with the same result: it creates the service, and almost immediately the service is gone.

Sorry for the huge post. I hope you can help me. Thanks.

EDIT

Thanks to @Pixel_Elephant. After removing the label kubernetes.io/cluster-service: 'true' from both grafana-service.yaml and heapster-service.yaml, the services now survive.
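
For reference, this is roughly what my grafana-service.yaml looks like after dropping that label (only the cluster-service line is removed, everything else is unchanged):

apiVersion: v1
kind: Service
metadata:
  labels:
    kubernetes.io/name: monitoring-grafana
  name: monitoring-grafana
  namespace: kube-system
spec:
  ports:
  - port: 80
    targetPort: 3000
  selector:
    name: influxGrafana
  type: NodePort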

One more step: in influxdb-grafana-controller.yaml, change:

- name: GF_SERVER_ROOT_URL
  value: /api/v1/proxy/namespaces/kube-system/services/monitoring-grafana/

to:

- name: GF_SERVER_ROOT_URL
  value: /

With that, I was finally able to reach the Grafana dashboard at http://192.168.99.100:<NODE_PORT>/ (the root URL has to be / because Grafana is now accessed directly through the NodePort rather than through the apiserver proxy).
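
The NodePort itself can be looked up, for example, with:

$ kubectl get svc monitoring-grafana --namespace=kube-system -o jsonpath='{.spec.ports[0].nodePort}'

or, if your minikube version supports it, this prints the full URL:

$ minikube service monitoring-grafana --namespace=kube-system --url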


1 Answer


Remove the kubernetes.io/cluster-service: 'true' label.

See https://github.com/kubernetes/kops/issues/13
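
As far as I understand from that issue, the kube-addon-manager periodically reconciles everything labelled kubernetes.io/cluster-service: 'true' against the manifests in its addons directory and deletes anything it does not recognize, which is why a manually created service carrying that label disappears after a few seconds. After recreating the service you can check that the label is really gone with, for example:

$ kubectl get svc monitoring-grafana --namespace=kube-system --show-labels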
