What is the best setup for running sonatype/nexus3 in Kubernetes that allows using Docker repositories?
Currently, I have a basic setup.
How do you get around the Ingress limitation of not allowing multiple ports with sonatype/nexus3?
I think this can be done with the nginx ingress, by using either paths or subdomains for your Ingress. For example:
Service
apiVersion: v1
kind: Service
metadata:
  labels:
    app: nexus
  name: nexus
  namespace: default
  selfLink: /api/v1/namespaces/default/services/nexus
spec:
  ports:
  - name: http
    port: 80
    targetPort: 8081
  - name: docker
    port: 5000
    targetPort: 5000
  selector:
    app: nexus
  type: ClusterIP
Ingress
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nexus
  namespace: default
  annotations:
    kubernetes.io/ingress.class: "nginx"
    kubernetes.io/tls-acme: "true"
spec:
  tls:
  - hosts:
    - nexus.example.com
    - docker.example.com
    secretName: nexus-tls
  rules:
  - host: nexus.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: nexus
          servicePort: 80
  - host: docker.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: nexus
          servicePort: 5000
Here https://nexus.example.com gives you the Nexus UI and all the registry functionality that works over the normal HTTP port, while https://docker.example.com exposes your docker repo (the Ingress routes it to the registry connector on service port 5000). Although this requires two different hostnames, it is a bit more explicit and does not rely on the client setting its user agent correctly. This also happens to be the strategy used by the Nexus Helm chart, as seen here:
https://github.com/kubernetes/charts/tree/master/stable/sonatype-nexus
Nexus needs to be served over SSL, otherwise docker will not connect to it. This can be achieved with a k8s Ingress + kube-lego for a Let's Encrypt certificate; any other real certificate will work as well. However, in order to serve both the Nexus UI and the docker registry through a single Ingress (and therefore a single port), a reverse proxy is needed behind the Ingress to detect the docker user agent and forward those requests to the registry.
                                                                              --(IF user agent docker) --> [nexus service]nexus:5000 --> docker registry
                                                                             |
[nexus ingress]nexus.example.com:80/ --> [proxy service]internal-proxy:80 -->|
                                                                             |
                                                                              --(ELSE)                 --> [nexus service]nexus:80   --> nexus UI
nexus-deployment.yaml This uses an azureFile volume, but you can use any volume. The secret is likewise not shown, for obvious reasons.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nexus
  namespace: default
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nexus
    spec:
      containers:
      - name: nexus
        image: sonatype/nexus3:3.3.1
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8081
        - containerPort: 5000
        volumeMounts:
        - name: nexus-data
          mountPath: /nexus-data
        resources:
          requests:
            cpu: 440m
            memory: 3.3Gi
          limits:
            cpu: 440m
            memory: 3.3Gi
      volumes:
      - name: nexus-data
        azureFile:
          secretName: azure-file-storage-secret
          shareName: nexus-data
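For reference, the azureFile volume above expects a secret that holds the Azure storage account name and key under the keys azurestorageaccountname and azurestorageaccountkey. A minimal sketch, with placeholder base64 values rather than real credentials:
apiVersion: v1
kind: Secret
metadata:
  name: azure-file-storage-secret
  namespace: default
type: Opaque
data:
  # base64-encode the real values, e.g. $ echo -n <storage-account-name> | base64
  # the values below are placeholders, not real credentials
  azurestorageaccountname: c3RvcmFnZWFjY291bnRuYW1l
  azurestorageaccountkey: c3RvcmFnZWFjY291bnRrZXk=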
It is always a good idea to add liveness and readiness probes so that kubernetes can detect when the application goes down. Hitting the index.html page does not always work well, so I use the REST API instead. This requires adding an Authorization header for a user with the nx-script-*-browse privilege. Obviously you will have to bring the system up first without probes to set up that user, and then update your deployment afterwards.
readinessProbe:
  httpGet:
    path: /service/siesta/rest/v1/script
    port: 8081
    httpHeaders:
    - name: Authorization
      # The authorization token is simply the base64 encoding of the `healthprobe` user's credentials:
      # $ echo -n user:password | base64
      value: Basic dXNlcjpwYXNzd29yZA==
  initialDelaySeconds: 900
  timeoutSeconds: 60
livenessProbe:
  httpGet:
    path: /service/siesta/rest/v1/script
    port: 8081
    httpHeaders:
    - name: Authorization
      value: Basic dXNlcjpwYXNzd29yZA==
  initialDelaySeconds: 900
  timeoutSeconds: 60
Because Nexus can sometimes take a long time to start, I use a very generous initial delay and timeout.
nexus-service.yaml exposes port 80 for the UI and port 5000 for the registry. This must match the port configured for the registry through the UI.
apiVersion: v1
kind: Service
metadata:
  labels:
    app: nexus
  name: nexus
  namespace: default
  selfLink: /api/v1/namespaces/default/services/nexus
spec:
  ports:
  - name: http
    port: 80
    targetPort: 8081
  - name: docker
    port: 5000
    targetPort: 5000
  selector:
    app: nexus
  type: ClusterIP
proxy-configmap.yaml adds nginx.conf as a ConfigMap data volume. It includes the rule for detecting the docker user agent. This relies on kubernetes DNS to reach the nexus service as the upstream.
apiVersion: v1
data:
  nginx.conf: |
    worker_processes auto;
    events {
      worker_connections 1024;
    }
    http {
      error_log /var/log/nginx/error.log warn;
      access_log /dev/null;
      proxy_intercept_errors off;
      proxy_send_timeout 120;
      proxy_read_timeout 300;
      upstream nexus {
        server nexus:80;
      }
      upstream registry {
        server nexus:5000;
      }
      server {
        listen 80;
        server_name nexus.example.com;
        keepalive_timeout 5 5;
        proxy_buffering off;
        # allow large uploads
        client_max_body_size 1G;
        location / {
          # redirect to docker registry
          if ($http_user_agent ~ docker ) {
            proxy_pass http://registry;
          }
          proxy_pass http://nexus;
          proxy_set_header Host $host;
          proxy_set_header X-Real-IP $remote_addr;
          proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
          proxy_set_header X-Forwarded-Proto "https";
        }
      }
    }
kind: ConfigMap
metadata:
  creationTimestamp: null
  name: internal-proxy-conf
  namespace: default
  selfLink: /api/v1/namespaces/default/configmaps/internal-proxy-conf
proxy-deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: internal-proxy
  namespace: default
spec:
  replicas: 1
  template:
    metadata:
      labels:
        proxy: internal
    spec:
      containers:
      - name: nginx
        image: nginx:1.11-alpine
        imagePullPolicy: IfNotPresent
        lifecycle:
          preStop:
            exec:
              command: ["/usr/sbin/nginx", "-s", "quit"]
        volumeMounts:
        - name: internal-proxy-conf
          mountPath: /etc/nginx/
        env:
        # This is a workaround to easily force a restart by incrementing the value (numbers must be quoted)
        # NGINX needs to be restarted for configuration changes, especially DNS changes, to be detected
        - name: RESTART_
          value: "0"
      volumes:
      - name: internal-proxy-conf
        configMap:
          name: internal-proxy-conf
          items:
          - key: nginx.conf
            path: nginx.conf
proxy-service.yaml The proxy's type is intentionally ClusterIP, because the Ingress forwards traffic to it. Port 443 is not used in this example.
kind: Service
apiVersion: v1
metadata:
  name: internal-proxy
  namespace: default
spec:
  selector:
    proxy: internal
  ports:
  - name: http
    port: 80
    targetPort: 80
  - name: https
    port: 443
    targetPort: 443
  type: ClusterIP
nexus-ingress.yaml This step assumes you have the nginx ingress controller installed. If you have your own certificate you don't need the Ingress and can expose the proxy service directly instead, but you lose the automation benefits of kube-lego.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nexus
  namespace: default
  annotations:
    kubernetes.io/ingress.class: "nginx"
    kubernetes.io/tls-acme: "true"
spec:
  tls:
  - hosts:
    - nexus.example.com
    secretName: nexus-tls
  rules:
  - host: nexus.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: internal-proxy
          servicePort: 80
This assumes your nginx ingress is set up with a proper TLS configuration and that your cluster can handle persistent volume claims:
Install Nexus
Install Nexus into the cluster using helm:
helm install stable/sonatype-nexus --name registry --namespace foo
Note: you can undo the installation with:
helm del --purge registry
Adjust the Nexus deployment
After installing Nexus with helm you will find the Nexus Deployment. Add containerPort: 5000 to it, just below the containerPort that is already there; the resulting ports section is sketched below.
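A minimal sketch of what the container's ports section ends up looking like, assuming 8081 is the chart's default port as noted further down:
ports:
- containerPort: 8081  # default port already present in the chart's deployment
- containerPort: 5000  # added for the docker repository connector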
Adjust the Nexus service
You also need to add port 5000 to the Nexus service. Put it below the default port:
- port: 5000
  targetPort: 5000
  protocol: TCP
  name: docker
Example Ingress configuration:
This configuration points https://registry.example.com to the Nexus UI on port 8081 and https://docker.example.com to the docker service on port 5000.
Note: in my case port 8081 is the default port of the Nexus deployment that you edited in the step above. Adjust it if your installation uses a different port.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example-com-ingress
  namespace: foo
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  # Provide the docker backend that is used for docker login.
  - host: docker.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: registry-sonatype-nexus
          servicePort: 5000
  # Provide the nexus backend that is used for the UI etc.
  - host: registry.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: registry-sonatype-nexus
          servicePort: 8081
  tls:
  - secretName: example-com-tls
    hosts:
    - registry.example.com
    - docker.example.com
Configure Nexus
You should now be able to open the Nexus UI at https://registry.example.com. Log in with the default credentials: user admin, password admin123.
Create a docker (hosted) repository, configure an HTTP Repository Connector on port 5000, and disable Force Basic Authentication.
Log in, tag and push an image
You should now be able to log your docker client in to the registry using your Nexus credentials:
docker login docker.example.com
Use this pattern to tag and push an image:
docker tag <image>:<tag> <nexus-hostname>/<namespace>/<image>:<tag>
docker push <nexus-hostname>/<namespace>/<image>:<tag>
For example:
docker tag myapp:1.0.0 docker.example.com/foo/myapp:1.0.0
docker push docker.example.com/foo/myapp:1.0.0