Tags: amazon-web-services, kubernetes, aws-fargate, amazon-eks
My goal is to deploy to AWS EKS using Fargate. I already had the deployment working with a node group. However, when I switch over to Fargate, the pods all seem to get stuck in a Pending state.

I am provisioning everything with Terraform (though I'm not necessarily looking for a Terraform-specific answer). This is how I create the EKS cluster:
module "eks_cluster" {
  source          = "terraform-aws-modules/eks/aws"
  version         = "13.2.1"
  cluster_name    = "${var.project_name}-${var.env_name}"
  cluster_version = var.cluster_version
  vpc_id          = var.vpc_id

  cluster_enabled_log_types = ["api", "audit", "authenticator", "controllerManager", "scheduler"]
  enable_irsa               = true
  subnets                   = concat(var.private_subnet_ids, var.public_subnet_ids)

  create_fargate_pod_execution_role = false

  node_groups = {
    my_nodes = {
      desired_capacity = 1
      max_capacity     = 2
      min_capacity     = 1
      instance_type    = var.nodes_instance_type
      subnets          = var.private_subnet_ids
    }
  }
}
This is how I configure the Fargate profile:
resource "aws_eks_fargate_profile" "airflow" {
  cluster_name           = module.eks_cluster.cluster_id
  fargate_profile_name   = "${var.project_name}-fargate-${var.env_name}"
  pod_execution_role_arn = aws_iam_role.fargate_iam_role.arn
  subnet_ids             = var.private_subnet_ids

  selector {
    namespace = "airflow"
  }
}
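Before digging into the pods themselves, it may be worth confirming that the profile itself reached ACTIVE and was given the subnets you expect. A quick AWS CLI check might look like this (the cluster and profile names below are placeholders corresponding to the Terraform variables above):

```shell
# Confirm the Fargate profile exists and is ACTIVE (names are placeholders)
aws eks describe-fargate-profile \
  --cluster-name my-project-dev \
  --fargate-profile-name my-project-fargate-dev \
  --query 'fargateProfile.{status: status, subnets: subnets, selectors: selectors}'
```

Fargate profiles only accept private subnets (subnets with no direct route to an internet gateway), so the `subnets` field in the output is worth a second look.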
And this is how I create the pod execution role and attach the required policy:
resource "aws_iam_role" "fargate_iam_role" {
  name                  = "${var.project_name}-fargate-${var.env_name}"
  force_detach_policies = true
  assume_role_policy = jsonencode({
    Statement = [{
      Action = "sts:AssumeRole"
      Effect = "Allow"
      Principal = {
        Service = ["eks-fargate-pods.amazonaws.com", "eks.amazonaws.com"]
      }
    }]
    Version = "2012-10-17"
  })
}

# Attach IAM Policy for Fargate
resource "aws_iam_role_policy_attachment" "fargate_pod_execution" {
  role       = aws_iam_role.fargate_iam_role.name
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSFargatePodExecutionRolePolicy"
}
I then deploy the pods (via a Helm chart) into the same namespace that the Fargate profile selects. When I run kubectl get pods -n airflow, I see all the pods stuck in Pending:
NAME                                 READY   STATUS    RESTARTS   AGE
airflow-flower-79b5948677-vww5d      0/1     Pending   0          40s
airflow-redis-master-0               0/1     Pending   0          40s
airflow-scheduler-6b6bd4b6f6-j9qzg   0/2     Pending   0          41s
airflow-web-567b55fbbf-z8dsg         0/2     Pending   0          41s
airflow-worker-0                     0/2     Pending   0          40s
airflow-worker-1                     0/2     Pending   0          40s
Next I check the events with kubectl get events -n airflow, which gives:
LAST SEEN TYPE REASON OBJECT MESSAGE
2m15s Normal LoggingEnabled pod/airflow-flower-79b5948677-vww5d Successfully enabled logging for pod
2m16s Normal SuccessfulCreate replicaset/airflow-flower-79b5948677 Created pod: airflow-flower-79b5948677-vww5d
2m17s Normal ScalingReplicaSet deployment/airflow-flower Scaled up replica set airflow-flower-79b5948677 to 1
2m15s Normal LoggingEnabled pod/airflow-redis-master-0 Successfully enabled logging for pod
2m16s Normal SuccessfulCreate statefulset/airflow-redis-master create Pod airflow-redis-master-0 in StatefulSet airflow-redis-master successful
2m15s Normal LoggingEnabled pod/airflow-scheduler-6b6bd4b6f6-j9qzg Successfully enabled logging for pod
2m16s Normal SuccessfulCreate replicaset/airflow-scheduler-6b6bd4b6f6 Created pod: airflow-scheduler-6b6bd4b6f6-j9qzg
2m17s Normal NoPods poddisruptionbudget/airflow-scheduler No matching pods found
2m17s Normal ScalingReplicaSet deployment/airflow-scheduler Scaled up replica set airflow-scheduler-6b6bd4b6f6 to 1
2m15s Normal LoggingEnabled pod/airflow-web-567b55fbbf-z8dsg Successfully enabled logging for pod
2m16s Normal SuccessfulCreate replicaset/airflow-web-567b55fbbf Created pod: airflow-web-567b55fbbf-z8dsg
2m17s Normal ScalingReplicaSet deployment/airflow-web Scaled up replica set airflow-web-567b55fbbf to 1
2m15s Normal LoggingEnabled pod/airflow-worker-0 Successfully enabled logging for pod
2m15s Normal LoggingEnabled pod/airflow-worker-1 Successfully enabled logging for pod
2m16s Normal SuccessfulCreate statefulset/airflow-worker create Pod airflow-worker-0 in StatefulSet airflow-worker successful
2m16s Normal SuccessfulCreate statefulset/airflow-worker create Pod airflow-worker-1 in StatefulSet airflow-worker successful
I also described one of the pods (via kubectl describe pod) and got:
Name: airflow-redis-master-0
Namespace: airflow
Priority: 2000001000
Priority Class Name: system-node-critical
Node: <none>
Labels: app=redis
chart=redis-10.5.7
controller-revision-hash=airflow-redis-master-588d57785d
eks.amazonaws.com/fargate-profile=airflow-fargate-airflow-dev
release=airflow
role=master
statefulset.kubernetes.io/pod-name=airflow-redis-master-0
Annotations: CapacityProvisioned: 0.25vCPU 0.5GB
Logging: LoggingEnabled
checksum/configmap: 2b82c78fd9186045e6e2b44cfbb38460310697cf2f2f175c9d8618dd4d42e1ca
checksum/health: a5073935c8eb985cf8f3128ba7abbc4121cef628a9a1b0924c95cf97d33323bf
checksum/secret: e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
cluster-autoscaler.kubernetes.io/safe-to-evict: true
kubernetes.io/psp: eks.privileged
Status: Pending
IP:
IPs: <none>
Controlled By: StatefulSet/airflow-redis-master
NominatedNodeName: 6f344dfd11-000a9c54e4e240a2a8b3dfceb5f8227e
Containers:
airflow-redis:
Image: docker.io/bitnami/redis:5.0.7-debian-10-r32
Port: 6379/TCP
Host Port: 0/TCP
Command:
/bin/bash
-c
if [[ -n $REDIS_PASSWORD_FILE ]]; then
password_aux=`cat ${REDIS_PASSWORD_FILE}`
export REDIS_PASSWORD=$password_aux
fi
if [[ ! -f /opt/bitnami/redis/etc/master.conf ]];then
cp /opt/bitnami/redis/mounted-etc/master.conf /opt/bitnami/redis/etc/master.conf
fi
if [[ ! -f /opt/bitnami/redis/etc/redis.conf ]];then
cp /opt/bitnami/redis/mounted-etc/redis.conf /opt/bitnami/redis/etc/redis.conf
fi
ARGS=("--port" "${REDIS_PORT}")
ARGS+=("--requirepass" "${REDIS_PASSWORD}")
ARGS+=("--masterauth" "${REDIS_PASSWORD}")
ARGS+=("--include" "/opt/bitnami/redis/etc/redis.conf")
ARGS+=("--include" "/opt/bitnami/redis/etc/master.conf")
/run.sh ${ARGS[@]}
Liveness: exec [sh -c /health/ping_liveness_local.sh 5] delay=5s timeout=5s period=5s #success=1 #failure=5
Readiness: exec [sh -c /health/ping_readiness_local.sh 5] delay=5s timeout=1s period=5s #success=1 #failure=5
Environment:
REDIS_REPLICATION_MODE: master
REDIS_PASSWORD: <set to the key 'redis-password' in secret 'my-creds'> Optional: false
REDIS_PORT: 6379
Mounts:
/data from redis-data (rw)
/health from health (rw)
/opt/bitnami/redis/etc/ from redis-tmp-conf (rw)
/opt/bitnami/redis/mounted-etc from config (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-dmwvn (ro)
Volumes:
health:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: airflow-redis-health
Optional: false
config:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: airflow-redis
Optional: false
redis-data:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
redis-tmp-conf:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
default-token-dmwvn:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-dmwvn
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal LoggingEnabled 3m12s fargate-scheduler Successfully enabled logging for pod
Warning FailedScheduling 12s fargate-scheduler Pod provisioning timed out (will retry) for pod: airflow/airflow-redis-master-0
For reference, my subnets are tagged like this:

kubernetes_tags = map(
  "kubernetes.io/role/${var.type == "Public" ? "elb" : "internal-elb"}", 1,
  "kubernetes.io/cluster/${var.kubernetes_cluster_name}", "shared"
)
single_nat_gateway   = true  # needed for fargate (https://docs.aws.amazon.com/eks/latest/userguide/eks-ug.pdf#page=135&zoom=100,96,764)
enable_nat_gateway   = true  # needed for fargate (https://docs.aws.amazon.com/eks/latest/userguide/eks-ug.pdf#page=135&zoom=100,96,764)
enable_vpn_gateway   = false
enable_dns_hostnames = true  # needed for fargate (https://docs.aws.amazon.com/eks/latest/userguide/eks-ug.pdf#page=135&zoom=100,96,764)
enable_dns_support   = true  # needed for fargate (https://docs.aws.amazon.com/eks/latest/userguide/eks-ug.pdf#page=135&zoom=100,96,764)
However, I'm reusing a VPC that was created separately, and I'm not sure how to check whether these settings are actually enabled on it.

What steps should I take to debug this issue?
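Those VPC settings can be read back from an existing VPC with the AWS CLI; a sketch (the VPC and subnet IDs are placeholders for your own):

```shell
VPC_ID=vpc-0123456789abcdef0        # placeholder: your VPC
SUBNET_ID=subnet-0123456789abcdef0  # placeholder: one private subnet

# enableDnsSupport and enableDnsHostnames must both be true for Fargate
aws ec2 describe-vpc-attribute --vpc-id "$VPC_ID" --attribute enableDnsSupport
aws ec2 describe-vpc-attribute --vpc-id "$VPC_ID" --attribute enableDnsHostnames

# Is there a NAT gateway in this VPC at all?
aws ec2 describe-nat-gateways --filter "Name=vpc-id,Values=$VPC_ID"

# Does the private subnet's route table send 0.0.0.0/0 to it?
aws ec2 describe-route-tables \
  --filters "Name=association.subnet-id,Values=$SUBNET_ID" \
  --query 'RouteTables[].Routes'
```

If the last command returns no route table, the subnet falls back to the VPC's main route table, which you can inspect with the filter `Name=association.main,Values=true` instead.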
For testing purposes, I think you need to enable connectivity from the VPC's private subnets to the outside world through a NAT gateway. You can create the NAT gateway in a public subnet, then add an entry for it to the route table associated with the private subnets, like so:

0.0.0.0/0 nat-xxxxxxxx

If that works, and you later want to restrict outbound traffic through a more secure firewall appliance instead, I think you would need to contact the firewall vendor's support to ask how to whitelist the outbound traffic.
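In Terraform terms, that NAT gateway and route table entry could be sketched roughly like this (the EIP, route table, and gateway resource names are hypothetical, not taken from the question's config):

```hcl
# Illustrative sketch: route all private-subnet egress through a NAT
# gateway placed in a public subnet. Resource names are placeholders.
resource "aws_eip" "nat" {
  vpc = true
}

resource "aws_nat_gateway" "main" {
  allocation_id = aws_eip.nat.id
  subnet_id     = var.public_subnet_ids[0]
}

resource "aws_route" "private_default" {
  route_table_id         = aws_route_table.private.id
  destination_cidr_block = "0.0.0.0/0"
  nat_gateway_id         = aws_nat_gateway.main.id
}
```

Note that the VPC module settings quoted in the question (`enable_nat_gateway`, `single_nat_gateway`) create the equivalent resources automatically; the sketch above is only needed when the VPC is managed outside that module.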