Tags: azure, kubernetes, azure-devops, azure-pipelines, azure-aks
I have split my initial azure-pipelines.yml out into templates so I can use iteration, etc. For whatever reason, the new image is not being deployed, despite using the latest tag and/or imagePullPolicy: Always.

Broadly, I have two pipelines, PR and Release:

PR is triggered when a pull request targeting production is submitted. It automatically runs unit tests, builds the Docker images, runs integration tests, and so on, and if everything passes it pushes the images to ACR. When the PR pipeline passes and the PR is approved, it is merged into production, which then triggers the Release pipeline. Here is an example of one of my k8s deployment manifests (the pipeline reports "unchanged" when applying these manifests):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: admin-v2-deployment-prod
  namespace: prod
spec:
  replicas: 3
  selector:
    matchLabels:
      component: admin-v2
  template:
    metadata:
      labels:
        component: admin-v2
    spec:
      containers:
        - name: admin-v2
          imagePullPolicy: Always
          image: appacr.azurecr.io/app-admin-v2:latest
          ports:
            - containerPort: 4001
---
apiVersion: v1
kind: Service
metadata:
  name: admin-v2-cluster-ip-service-prod
  namespace: prod
spec:
  type: ClusterIP
  selector:
    component: admin-v2
  ports:
    - port: 4001
      targetPort: 4001
Here are the various related pipeline .yaml files I have split out:
# templates/variables.yaml
variables:
  dockerRegistryServiceConnection: '<GUID>'
  imageRepository: 'app'
  containerRegistry: 'appacr.azurecr.io'
  dockerfilePath: '$(Build.SourcesDirectory)'
  tag: '$(Build.BuildId)'
  imagePullSecret: 'appacr1c5a-auth'
  vmImageName: 'ubuntu-latest'
# pr.yaml
trigger: none

resources:
- repo: self

pool:
  vmImage: $(vmImageName)    # note: was "vmIMage", which Azure Pipelines ignores

variables:
- template: templates/variables.yaml

stages:
- template: templates/changed.yaml
- template: templates/unitTests.yaml
- template: templates/build.yaml
  parameters:
    services:
    - api
    - admin
    - admin-v2
    - client
- template: templates/integrationTests.yaml
# templates/build.yaml
parameters:
- name: services
  type: object
  default: []

stages:
- stage: Build
  displayName: Build stage
  jobs:
  - job: Build
    displayName: Build
    steps:
    - ${{ each service in parameters.services }}:
      - task: Docker@2
        displayName: Build and push an ${{ service }} image to container registry
        inputs:
          command: buildAndPush
          repository: $(imageRepository)-${{ service }}
          dockerfile: $(dockerfilePath)/${{ service }}/Dockerfile
          containerRegistry: $(dockerRegistryServiceConnection)
          tags: |
            $(tag)
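For clarity, the `${{ each }}` loop above is expanded at template compile time, before the pipeline runs. For the admin-v2 entry it expands into roughly the following step (a sketch, not generated output):

```yaml
# Compile-time expansion of the loop for one service ("admin-v2").
- task: Docker@2
  displayName: Build and push an admin-v2 image to container registry
  inputs:
    command: buildAndPush
    repository: $(imageRepository)-admin-v2      # resolves to app-admin-v2
    dockerfile: $(dockerfilePath)/admin-v2/Dockerfile
    containerRegistry: $(dockerRegistryServiceConnection)
    tags: |
      $(tag)                                     # resolves to the numeric Build.BuildId
```

Note that, as written, each image is pushed only with the `$(tag)` (build ID) tag; no `latest` tag is pushed, even though the deployment manifest references `:latest`.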
# release.yaml
trigger:
  branches:
    include:
    - production

resources:
- repo: self

variables:
- template: templates/variables.yaml

stages:
- template: templates/publish.yaml
- template: templates/deploy.yaml
  parameters:
    services:
    - api
    - admin
    - admin-v2
    - client
# templates/deploy.yaml
parameters:
- name: services
  type: object
  default: []

stages:
- stage: Deploy
  displayName: Deploy stage
  dependsOn: Publish
  jobs:
  - deployment: Deploy
    displayName: Deploy
    pool:
      vmImage: $(vmImageName)
    environment: 'App Production AKS'
    strategy:
      runOnce:
        deploy:
          steps:
          - task: KubernetesManifest@0
            displayName: Create imagePullSecret
            inputs:
              action: createSecret
              secretName: $(imagePullSecret)
              kubernetesServiceConnection: 'App Production AKS'
              dockerRegistryEndpoint: $(dockerRegistryServiceConnection)
          - ${{ each service in parameters.services }}:
            - task: KubernetesManifest@0
              displayName: Deploy to ${{ service }} Kubernetes cluster
              inputs:
                action: deploy
                kubernetesServiceConnection: 'App Production AKS'
                manifests: |
                  $(Pipeline.Workspace)/k8s/aks/${{ service }}.yaml
                imagePullSecrets: |
                  $(imagePullSecret)
                containers: |
                  $(containerRegistry)/$(imageRepository)-${{ service }}:$(tag)
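One thing worth double-checking here (my reading of the KubernetesManifest task, so treat it as an assumption): the `containers:` input is meant to perform image substitution, rewriting any image reference in the applied manifests whose repository matches the supplied value. If that substitution were matching, the admin-v2 pod spec actually applied to the cluster would look roughly like this:

```yaml
# Sketch: what the deploy step would apply for admin-v2 if the
# "containers" substitution matched the image in the manifest.
containers:
  - name: admin-v2
    imagePullPolicy: Always
    image: appacr.azurecr.io/app-admin-v2:$(tag)   # $(tag) already resolved to the build ID
```

Since the cluster instead reports "unchanged", it is worth verifying that the substitution actually matches the `image:` field in the manifest and that the manifest path under `$(Pipeline.Workspace)` is what the Publish stage produced.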
Both the PR and Release pipelines pass... Any suggestions as to what I am doing wrong here?
> For whatever reason, the new image is not being deployed, despite using the latest tag

How does Kubernetes know there is a new image? Kubernetes configuration is declarative, and the cluster is already running the image that was "latest" at the time it was deployed.

> Here is an example of one of my k8s deployment manifests (the pipeline reports "unchanged" when applying these manifests)

Right, it is unchanged, because the declared desired state has not changed. A Deployment manifest describes what should be running; it is not an imperative command.

Whenever you build an image, always give it a unique name. And whenever you want to deploy something, always reference a unique name for what should run; Kubernetes will then roll it out gracefully with zero downtime via a rolling deployment, unless you have configured it to behave differently.
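Sketched against the manifest from the question, that advice comes down to pinning the pod template to a per-build tag instead of latest (the `1234` tag below is illustrative; in this pipeline it would be the `$(Build.BuildId)`-based `$(tag)`):

```yaml
# Pod template referencing a unique, per-build image tag. Because the pod
# spec now differs byte-for-byte from the previous revision, applying the
# manifest registers a change and triggers a rolling update.
spec:
  template:
    spec:
      containers:
        - name: admin-v2
          image: appacr.azurecr.io/app-admin-v2:1234   # unique per build (illustrative)
          imagePullPolicy: IfNotPresent                # Always is no longer needed
```

With a floating latest tag, by contrast, the applied pod template is identical on every release, so the API server reports "unchanged" and no rollout ever starts, regardless of imagePullPolicy.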
Viewed: 1526 times