What is a canary release based on nginx-ingress in Kubernetes

What is a canary release based on nginx-ingress in Kubernetes? Many people without hands-on experience are at a loss when they first face this question, so this article walks through the concept and a complete example; after reading it you should be able to set one up yourself.


Suppose that in the current production environment we already have a service, app-old, serving layer-7 traffic. We have fixed some issues and want to release a new version, app-new, as a canary. Rather than switching all client traffic to app-new at once, we only want to route 20% of the traffic to it, let it run for a while until it proves stable, then move all traffic over to app-new, and finally retire app-old smoothly.

To cover these different release needs, the Kubernetes nginx Ingress controller supports several ways of splitting traffic:

  1. Splitting by Request Header, suitable for canary releases and A/B testing

  2. Splitting by Cookie, suitable for canary releases and A/B testing

  3. Splitting by Query Param, suitable for canary releases and A/B testing

  4. Splitting by service weight, suitable for blue-green release scenarios

The test below uses weight-based traffic splitting (nginx.ingress.kubernetes.io/canary-weight: "30"); you can also swap the weight annotation for the header-based annotations to split by request header, as sketched right below.
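For reference, a header-based variant of the canary Ingress could look like the following. This is a minimal sketch that reuses the host, namespace, and service names from the weight-based example further down and relies on the standard ingress-nginx canary-by-header annotation; adjust the names to your own environment.

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-app-canary
  labels:
    app: my-app
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/canary: "true"
    # requests with the header "canary: always" go to app-new;
    # "canary: never" (or no header at all) stays on app-old
    nginx.ingress.kubernetes.io/canary-by-header: "canary"
  namespace: default
spec:
  rules:
    - host: test.192.168.2.20.xip.io
      http:
        paths:
          - backend:
              serviceName: app-new
              servicePort: 80
            path: /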

Prepare the old version

Old version application app-old

app-old.yaml

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: app-old
spec:
  replicas: 2
  selector:
    matchLabels:
      run: app-old
  template:
    metadata:
      labels:
        run: app-old
    spec:
      containers:
      - image: zouhl/app:v2.1
        imagePullPolicy: Always
        name: app-old
        ports:
        - containerPort: 80
          protocol: TCP
      restartPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
  name: app-old
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    run: app-old
  sessionAffinity: None
  type: NodePort


The old version's Ingress

app-v1.yaml

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-app
  labels:
    app: my-app
  annotations:
    kubernetes.io/ingress.class: nginx
  namespace: default
spec:
  rules:
    - host: test.192.168.2.20.xip.io
      http:
        paths:
          - backend:
              serviceName: app-old
              servicePort: 80
            path: /


Create them in k8s

kubectl create -f app-old.yaml
kubectl create -f app-v1.yaml

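After creating them, it is worth a quick sanity check that the Deployment's pods are running and the Service has picked them up (the label selector run=app-old comes from the manifest above):

kubectl get pods -l run=app-old
kubectl get svc app-old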

Prepare the new version

New version app-new.yaml

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: app-new
spec:
  replicas: 2
  selector:
    matchLabels:
      run: app-new
  template:
    metadata:
      labels:
        run: app-new
    spec:
      containers:
      - image: zouhl/app:v2.2
        imagePullPolicy: Always
        name: app-new
        ports:
        - containerPort: 80
          protocol: TCP
      restartPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
  name: app-new
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    run: app-new
  sessionAffinity: None
  type: NodePort


New version canary Ingress

app-v2-canary.yaml

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-app-canary
  labels:
    app: my-app
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-weight: "30"
  namespace: default
spec:
  rules:
    - host: test.192.168.2.20.xip.io
      http:
        paths:
          - backend:
              serviceName: app-new
              servicePort: 80
            path: /


New version Ingress yaml

app-v2.yaml

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-app
  labels:
    app: my-app
  annotations:
    kubernetes.io/ingress.class: nginx
  namespace: default
spec:
  rules:
    - host: test.192.168.2.20.xip.io
      http:
        paths:
          - backend:
              serviceName: app-new
              servicePort: 80
            path: /


Release workflow

$ tree                                              
.
├── app-new.yaml
├── app-old.yaml
├── app-v1.yaml
├── app-v2-canary.yaml
└── app-v2.yaml


app-v1 is already live. Now do a canary release of the second version with a weight of 30% (nginx.ingress.kubernetes.io/canary-weight: "30"); for more canary parameters, see the ingress-nginx documentation on GitHub.

kubectl create -f app-new.yaml
kubectl create -f app-v2-canary.yaml

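If you later want to shift more traffic to the canary without recreating the Ingress, one option (a sketch, assuming the my-app-canary Ingress created above) is to overwrite the weight annotation in place:

# raise the canary share from 30% to, say, 50%
kubectl annotate ingress my-app-canary nginx.ingress.kubernetes.io/canary-weight=50 --overwrite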

Check

$ kubectl get ingresses.extensions   
NAME            HOSTS                       ADDRESS   PORTS   AGE
app-ingress     www.example.com                       80      109m
my-app          test.192.168.2.20.xip.io              80      25m
my-app-canary   test.192.168.2.20.xip.io              80      1s
nginx-test      nginx.192.168.2.20.xip.io             80      3h22m


Observe in the background: roughly 70% of requests go to v1 and 30% go to v2.

$ while sleep 0.5; do curl "test.192.168.2.20.xip.io";echo; done
{"v2.2 hostname":"app-new-658dfc9c6b-lbmvr"}
{"v2.2 hostname":"app-new-658dfc9c6b-qhwtg"}
{"v1 hostname":"app-old-64fd44b699-4hvlb"}
{"v1 hostname":"app-old-64fd44b699-zb58f"}

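To measure the actual split rather than eyeballing a few responses, you can count how many of a fixed number of requests hit the new version (a quick sketch that simply greps the version string in the JSON reply):

# send 100 requests and count how many were served by v2.2
for i in $(seq 1 100); do curl -s "test.192.168.2.20.xip.io"; echo; done | grep -c "v2.2"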

If everything looks fine, you can promote the release:

# delete the canary ingress
kubectl delete -f app-v2-canary.yaml
# set 100% traffic to v2
kubectl apply -f app-v2.yaml


Check the Ingress

$ kubectl get ingresses.extensions    
NAME          HOSTS                       ADDRESS   PORTS   AGE
app-ingress   www.example.com                       80      109m
my-app        test.192.168.2.20.xip.io              80      25m
nginx-test    nginx.192.168.2.20.xip.io             80      3h23m

$ while sleep 0.5; do curl "test.192.168.2.20.xip.io";echo; done
{"v2.2 hostname":"app-new-658dfc9c6b-lbmvr"}
{"v2.2 hostname":"app-new-658dfc9c6b-qhwtg"}

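Once all traffic is confirmed to reach the new version, the old Deployment and Service can be retired smoothly, completing the workflow described at the start (assuming app-old.yaml is still at hand):

# remove the old Deployment and Service after v2 has fully taken over
kubectl delete -f app-old.yaml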
 

Having read the above, do you now have a handle on canary releases based on nginx-ingress in Kubernetes? Thanks for reading!

