
I have a Kubernetes cluster set up with one master node and one worker node.

Traffic is routed into the cluster by NATing from the host to the ingress-nginx Service of type LoadBalancer, set up with MetalLB:

#!/bin/bash

# DNAT incoming HTTP traffic arriving on eth0 to the ingress-nginx external IP ($1)
iptables -t nat -A PREROUTING -p tcp -i eth0 --dport 80 -j DNAT --to-destination "$1":80
# Allow the forwarded connections through the FORWARD chain
iptables -A FORWARD -p tcp -d "$1" --dport 80 -m state --state NEW,ESTABLISHED,RELATED -j ACCEPT

$1 is the external IP of the ingress-nginx Service.
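For example, assuming the script is saved as nat-to-ingress.sh and MetalLB has assigned 192.168.1.240 (both placeholder values), it is invoked as:

sudo ./nat-to-ingress.sh 192.168.1.240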

apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local
  ports:
    - name: http
      port: 80
      targetPort: 80
      protocol: TCP
    - name: https
      port: 443
      targetPort: 443
      protocol: TCP
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
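For reference, the external IP that ends up in $1 can be read straight from the Service status:

kubectl -n ingress-nginx get svc ingress-nginx -o jsonpath='{.status.loadBalancer.ingress[0].ip}'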

At this point, if I check the logs of the ingress-nginx pod, I can see the real source IP address.
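To be explicit about where the correct address shows up, these are the controller logs, tailed by label selector:

kubectl -n ingress-nginx logs -l app.kubernetes.io/name=ingress-nginx --tail=20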

The problem is that in the logs of the downstream apps, which receive traffic from the ingress, the source IP is the IP of the ingress pod.
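A quick way to confirm this is to compare the address in the backend logs with the controller pod's IP; the original client address should at least still be visible at the HTTP layer in the X-Forwarded-For header, which ingress-nginx sets on proxied requests by default (deploy/laurkyt below is an assumption about how the backend workload is named):

kubectl -n ingress-nginx get pods -o wide
kubectl -n laurkyt logs deploy/laurkyt --tail=20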

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  namespace: laurkyt
  name: laurkyt-ingress
  annotations:
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    nginx.ingress.kubernetes.io/affinity: cookie
    nginx.ingress.kubernetes.io/session-cookie-hash: sha1
    nginx.ingress.kubernetes.io/session-cookie-name: REALTIMESERVERID
    nginx.ingress.kubernetes.io/proxy-connect-timeout: "3600"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "3600"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "3600"
    nginx.ingress.kubernetes.io/send-timeout: "3600"
    nginx.ingress.kubernetes.io/proxy-body-size: 50m
    ingress.kubernetes.io/proxy-body-size: 50m
spec:
  tls:
  - hosts:
    - example.com
    - '*.example.com'
    secretName: wildcard-example-com
  rules:
  - host: example.com
    http:
      paths:
      - backend:
          serviceName: laurkyt
          servicePort: 443
---
apiVersion: v1
kind: Service
metadata:
  namespace: laurkyt
  name: laurkyt
  labels:
    app: laurkyt
spec:
  externalTrafficPolicy: Local
  ports:
  - port: 80
    targetPort: 80
    name: "http"
  - port: 443
    targetPort: 443
    name: "https"
  selector:
    app: laurkyt
    tier: laurkyt

Does anyone know what I am missing in order to preserve the source IP at the backend pods too?
