
I am trying to set up a 3-node Vault cluster with Raft storage enabled. I am currently at a loss as to why the readiness probe (and the liveness probe as well) keeps failing with: http: server gave HTTP response to HTTPS client — even though TLS is enabled in my listener configuration (and the server log below reports tls: "disabled" for Listener 1).

I am using Helm 3 with "helm install vault hashicorp/vault --namespace vault -f override-values.yaml"
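The full command sequence was roughly as follows (the repo-add step is an assumption; the repository URL is HashiCorp's public Helm chart repo):

```shell
# Add the HashiCorp Helm repository (assumed setup step) and install the chart
helm repo add hashicorp https://helm.releases.hashicorp.com
helm repo update
helm install vault hashicorp/vault --namespace vault -f override-values.yaml
```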

global:
  enabled: true
  tlsDisable: false

injector:
  enabled: false

server:
  image:
    repository: "hashicorp/vault"
    tag: "1.5.5"

  resources:
    requests:
      memory: 1Gi
      cpu: 2000m
    limits:
      memory: 2Gi
      cpu: 2000m

  readinessProbe:
    enabled: true
    path: "/v1/sys/health?standbyok=true&sealedcode=204&uninitcode=204"
  livenessProbe:
    enabled: true
    path: "/v1/sys/health?standbyok=true"
    initialDelaySeconds: 60

  extraEnvironmentVars:
    VAULT_CACERT: /vault/userconfig/tls-ca/ca.crt

  # extraVolumes is a list of extra volumes to mount. These will be exposed
  # to Vault in the path `/vault/userconfig/<name>/`.
  extraVolumes:
    # holds the cert file and the key file
    - type: secret
      name: tls-server
    # holds the ca certificate
    - type: secret
      name: tls-ca
    
  auditStorage:
    enabled: true

  standalone:
    enabled: false

  # Run Vault in "HA" mode.
  ha:
    enabled: true
    replicas: 3
    raft:
      enabled: true
      setNodeId: true

    config: |
        ui = true
        listener "tcp" {
          address = "[::]:8200"
          cluster_address = "[::]:8201"
          tls_cert_file = "/vault/userconfig/tls-server/tls.crt"
          tls_key_file = "/vault/userconfig/tls-server/tls.key"
          tls_ca_cert_file = "/vault/userconfig/tls-ca/ca.crt"
        }

        storage "raft" {
          path = "/vault/data"
          retry_join {
            leader_api_addr = "https://vault-0.vault-internal:8200"
            leader_ca_cert_file = "/vault/userconfig/tls-ca/ca.crt"
            leader_client_cert_file = "/vault/userconfig/tls-server/tls.crt"
            leader_client_key_file = "/vault/userconfig/tls-server/tls.key"
          }
          retry_join {
            leader_api_addr = "https://vault-1.vault-internal:8200"
            leader_ca_cert_file = "/vault/userconfig/tls-ca/ca.crt"
            leader_client_cert_file = "/vault/userconfig/tls-server/tls.crt"
            leader_client_key_file = "/vault/userconfig/tls-server/tls.key"
          }
          retry_join {
            leader_api_addr = "https://vault-2.vault-internal:8200"
            leader_ca_cert_file = "/vault/userconfig/tls-ca/ca.crt"
            leader_client_cert_file = "/vault/userconfig/tls-server/tls.crt"
            leader_client_key_file = "/vault/userconfig/tls-server/tls.key"
          }
        }

        service_registration "kubernetes" {}

# Vault UI
ui:
  enabled: true
  serviceType: "ClusterIP"
  serviceNodePort: null
  externalPort: 8200
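For reference, the two secrets mounted via extraVolumes are assumed to be laid out like this (hypothetical manifests; the key names tls.crt, tls.key, and ca.crt are implied by the file paths in the listener stanza):

```yaml
# Hypothetical manifests for the secrets referenced by extraVolumes.
apiVersion: v1
kind: Secret
metadata:
  name: tls-server           # mounted at /vault/userconfig/tls-server/
  namespace: vault
type: kubernetes.io/tls
data:
  tls.crt: <base64-encoded server certificate>
  tls.key: <base64-encoded private key>
---
apiVersion: v1
kind: Secret
metadata:
  name: tls-ca               # mounted at /vault/userconfig/tls-ca/
  namespace: vault
type: Opaque
data:
  ca.crt: <base64-encoded CA certificate>
```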

Output from describe pod vault-0:

Name:         vault-0
Namespace:    vault
Priority:     0
Node:         node4/10.211.55.7
Start Time:   Wed, 11 Nov 2020 15:06:47 +0700
Labels:       app.kubernetes.io/instance=vault
              app.kubernetes.io/name=vault
              component=server
              controller-revision-hash=vault-5c4b47bdc4
              helm.sh/chart=vault-0.8.0
              statefulset.kubernetes.io/pod-name=vault-0
              vault-active=false
              vault-initialized=false
              vault-perf-standby=false
              vault-sealed=true
              vault-version=1.5.5
Annotations:  <none>
Status:       Running
IP:           10.42.4.82
IPs:
  IP:           10.42.4.82
Controlled By:  StatefulSet/vault
Containers:
  vault:
    Container ID:  containerd://6dfde76051f44c22003cc02a880593792d304e74c56d717eef982e0e799672f2
    Image:         hashicorp/vault:1.5.5
    Image ID:      docker.io/hashicorp/vault@sha256:90cfeead29ef89fdf04383df9991754f4a54c43b2fb49ba9ff3feb713e5ef1be
    Ports:         8200/TCP, 8201/TCP, 8202/TCP
    Host Ports:    0/TCP, 0/TCP, 0/TCP
    Command:
      /bin/sh
      -ec
    Args:
      cp /vault/config/extraconfig-from-values.hcl /tmp/storageconfig.hcl;
      [ -n "${HOST_IP}" ] && sed -Ei "s|HOST_IP|${HOST_IP?}|g" /tmp/storageconfig.hcl;
      [ -n "${POD_IP}" ] && sed -Ei "s|POD_IP|${POD_IP?}|g" /tmp/storageconfig.hcl;
      [ -n "${HOSTNAME}" ] && sed -Ei "s|HOSTNAME|${HOSTNAME?}|g" /tmp/storageconfig.hcl;
      [ -n "${API_ADDR}" ] && sed -Ei "s|API_ADDR|${API_ADDR?}|g" /tmp/storageconfig.hcl;
      [ -n "${TRANSIT_ADDR}" ] && sed -Ei "s|TRANSIT_ADDR|${TRANSIT_ADDR?}|g" /tmp/storageconfig.hcl;
      [ -n "${RAFT_ADDR}" ] && sed -Ei "s|RAFT_ADDR|${RAFT_ADDR?}|g" /tmp/storageconfig.hcl;
      /usr/local/bin/docker-entrypoint.sh vault server -config=/tmp/storageconfig.hcl 
      
    State:          Running
      Started:      Wed, 11 Nov 2020 15:25:21 +0700
    Last State:     Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Wed, 11 Nov 2020 15:19:10 +0700
      Finished:     Wed, 11 Nov 2020 15:20:20 +0700
    Ready:          False
    Restart Count:  8
    Limits:
      cpu:     2
      memory:  2Gi
    Requests:
      cpu:      2
      memory:   1Gi
    Liveness:   http-get https://:8200/v1/sys/health%3Fstandbyok=true delay=60s timeout=3s period=5s #success=1 #failure=2
    Readiness:  http-get https://:8200/v1/sys/health%3Fstandbyok=true&sealedcode=204&uninitcode=204 delay=5s timeout=3s period=5s #success=1 #failure=2
    Environment:
      HOST_IP:               (v1:status.hostIP)
      POD_IP:                (v1:status.podIP)
      VAULT_K8S_POD_NAME:   vault-0 (v1:metadata.name)
      VAULT_K8S_NAMESPACE:  vault (v1:metadata.namespace)
      VAULT_ADDR:           https://127.0.0.1:8200
      VAULT_API_ADDR:       https://$(POD_IP):8200
      SKIP_CHOWN:           true
      SKIP_SETCAP:          true
      HOSTNAME:             vault-0 (v1:metadata.name)
      VAULT_CLUSTER_ADDR:   https://$(HOSTNAME).vault-internal:8201
      VAULT_RAFT_NODE_ID:   vault-0 (v1:metadata.name)
      HOME:                 /home/vault
      VAULT_CACERT:         /vault/userconfig/tls-ca/ca.crt
    Mounts:
      /home/vault from home (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from vault-token-lfgnj (ro)
      /vault/audit from audit (rw)
      /vault/config from config (rw)
      /vault/data from data (rw)
      /vault/userconfig/tls-ca from userconfig-tls-ca (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  data:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  data-vault-0
    ReadOnly:   false
  audit:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  audit-vault-0
    ReadOnly:   false
  config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      vault-config
    Optional:  false
  userconfig-tls-ca:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  tls-ca
    Optional:    false
  home:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     
    SizeLimit:  <unset>
  vault-token-lfgnj:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  vault-token-lfgnj
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  18m                   default-scheduler  Successfully assigned vault/vault-0 to node4
  Warning  Unhealthy  17m (x2 over 17m)     kubelet            Liveness probe failed: Get "https://10.42.4.82:8200/v1/sys/health?standbyok=true": http: server gave HTTP response to HTTPS client
  Normal   Killing    17m                   kubelet            Container vault failed liveness probe, will be restarted
  Normal   Pulled     17m (x2 over 18m)     kubelet            Container image "hashicorp/vault:1.5.5" already present on machine
  Normal   Created    17m (x2 over 18m)     kubelet            Created container vault
  Normal   Started    17m (x2 over 18m)     kubelet            Started container vault
  Warning  Unhealthy  13m (x56 over 18m)    kubelet            Readiness probe failed: Get "https://10.42.4.82:8200/v1/sys/health?standbyok=true&sealedcode=204&uninitcode=204": http: server gave HTTP response to HTTPS client
  Warning  BackOff    3m41s (x31 over 11m)  kubelet            Back-off restarting failed container

Logs from vault-0:

2020-11-12T05:50:43.554426582Z ==> Vault server configuration:
2020-11-12T05:50:43.554524646Z 
2020-11-12T05:50:43.554574639Z              Api Address: https://10.42.4.85:8200
2020-11-12T05:50:43.554586234Z                      Cgo: disabled
2020-11-12T05:50:43.554596948Z          Cluster Address: https://vault-0.vault-internal:8201
2020-11-12T05:50:43.554608637Z               Go Version: go1.14.7
2020-11-12T05:50:43.554678454Z               Listener 1: tcp (addr: "[::]:8200", cluster address: "[::]:8201", max_request_duration: "1m30s", max_request_size: "33554432", tls: "disabled")
2020-11-12T05:50:43.554693734Z                Log Level: info
2020-11-12T05:50:43.554703897Z                    Mlock: supported: true, enabled: false
2020-11-12T05:50:43.554713272Z            Recovery Mode: false
2020-11-12T05:50:43.554722579Z                  Storage: raft (HA available)
2020-11-12T05:50:43.554732788Z                  Version: Vault v1.5.5
2020-11-12T05:50:43.554769315Z              Version Sha: f5d1ddb3750e7c28e25036e1ef26a4c02379fc01
2020-11-12T05:50:43.554780425Z 
2020-11-12T05:50:43.672225223Z ==> Vault server started! Log data will stream in below:
2020-11-12T05:50:43.672519986Z 
2020-11-12T05:50:43.673078706Z 2020-11-12T05:50:43.543Z [INFO]  proxy environment: http_proxy= https_proxy= no_proxy=
2020-11-12T05:51:57.838970945Z ==> Vault shutdown triggered
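Worth noting: Listener 1 in the log above reports tls: "disabled", so the listener stanza from my ha.config does not appear to be applied, which would explain the probe error. The entrypoint (see the Args in the describe output) copies the rendered config to /tmp/storageconfig.hcl, so the config the container actually started with can be inspected:

```shell
# Dump the config file the vault container actually loaded
kubectl exec -n vault vault-0 -- cat /tmp/storageconfig.hcl
```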

I am running a 6-node Rancher k3s cluster, v1.19.3+k3s2, on a Mac.

Any help would be appreciated.
