My traefik service in the kube-system namespace is stuck in a pending state:
traefik LoadBalancer 10.43.42.172 <pending> 9100:32314/TCP,80:30799/TCP,443:30543/TCP 59d
The corresponding svclb pod is pending as well:
svclb-traefik-bf918f33-gddm2 0/3 Pending 0 21m
Here is the description of the service:
Name: traefik
Namespace: kube-system
Labels: app.kubernetes.io/instance=traefik-kube-system
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=traefik
helm.sh/chart=traefik-33.0.0
Annotations: meta.helm.sh/release-name: traefik
meta.helm.sh/release-namespace: kube-system
Selector: app.kubernetes.io/instance=traefik-kube-system,app.kubernetes.io/name=traefik
Type: LoadBalancer
IP Family Policy: PreferDualStack
IP Families: IPv4
IP: 10.43.42.172
IPs: 10.43.42.172
IP: 192.168.1.45
Port: metrics 9100/TCP
TargetPort: metrics/TCP
NodePort: metrics 32314/TCP
Endpoints: 10.42.2.107:9102
Port: web 80/TCP
TargetPort: web/TCP
NodePort: web 30799/TCP
Endpoints: 10.42.2.107:8000
Port: websecure 443/TCP
TargetPort: websecure/TCP
NodePort: websecure 30543/TCP
Endpoints: 10.42.2.107:8443
Session Affinity: None
External Traffic Policy: Cluster
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal UpdatedLoadBalancer 58d service-lb-controller Updated LoadBalancer with new IPs: [192.168.1.23 192.168.1.43] -> [192.168.1.23 192.168.1.43]
Normal EnsuringLoadBalancer 58d service-controller Ensuring load balancer
Normal AppliedDaemonSet 58d service-lb-controller Applied LoadBalancer DaemonSet kube-system/svclb-traefik-bf918f33
Normal UpdatedLoadBalancer 58d service-lb-controller Updated LoadBalancer with new IPs: [192.168.1.23 192.168.1.43] -> [192.168.1.23 192.168.1.43]
Normal EnsuringLoadBalancer 58d service-controller Ensuring load balancer
Normal AppliedDaemonSet 58d service-lb-controller Applied LoadBalancer DaemonSet kube-system/svclb-traefik-bf918f33
Normal UpdatedLoadBalancer 58d service-lb-controller Updated LoadBalancer with new IPs: [192.168.1.23 192.168.1.43] -> [192.168.1.23]
Normal UpdatedLoadBalancer 58d service-lb-controller Updated LoadBalancer with new IPs: [192.168.1.23] -> [192.168.1.23 192.168.1.43]
Normal AppliedDaemonSet 58d service-lb-controller Applied LoadBalancer DaemonSet kube-system/svclb-traefik-bf918f33
Normal EnsuringLoadBalancer 58d service-controller Ensuring load balancer
Normal UpdatedLoadBalancer 58d service-lb-controller Updated LoadBalancer with new IPs: [192.168.1.23 192.168.1.43] -> [192.168.1.23 192.168.1.43]
Normal UpdatedLoadBalancer 58d service-lb-controller Updated LoadBalancer with new IPs: [192.168.1.23 192.168.1.43] -> [192.168.1.23]
Normal UpdatedLoadBalancer 58d service-lb-controller Updated LoadBalancer with new IPs: [192.168.1.23] -> [192.168.1.23 192.168.1.43]
Normal EnsuringLoadBalancer 58d service-controller Ensuring load balancer
Normal AppliedDaemonSet 58d service-lb-controller Applied LoadBalancer DaemonSet kube-system/svclb-traefik-bf918f33
Normal UpdatedLoadBalancer 58d service-lb-controller Updated LoadBalancer with new IPs: [192.168.1.23 192.168.1.43] -> [192.168.1.23]
Normal UpdatedLoadBalancer 58d service-lb-controller Updated LoadBalancer with new IPs: [192.168.1.23] -> [192.168.1.23 192.168.1.43]
Normal UpdatedLoadBalancer 58d (x14 over 58d) service-lb-controller Updated LoadBalancer with new IPs: [192.168.1.23 192.168.1.43] -> [192.168.1.23 192.168.1.43]
Normal EnsuringLoadBalancer 31m (x2 over 24h) service-controller Ensuring load balancer
Normal AppliedDaemonSet 31m (x2 over 24h) service-lb-controller Applied LoadBalancer DaemonSet kube-system/svclb-traefik-bf918f33
Normal LoadbalancerIP 31m service-controller -> 192.168.1.45
And the pod:
Name: svclb-traefik-bf918f33-gddm2
Namespace: kube-system
Priority: 2000001000
Priority Class Name: system-node-critical
Service Account: svclb
Node: <none>
Labels: app=svclb-traefik-bf918f33
controller-revision-hash=565796b9cd
pod-template-generation=2
svccontroller.k3s.cattle.io/svcname=traefik
svccontroller.k3s.cattle.io/svcnamespace=kube-system
Annotations: <none>
Status: Pending
IP:
IPs: <none>
Controlled By: DaemonSet/svclb-traefik-bf918f33
Containers:
lb-tcp-9100:
Image: rancher/klipper-lb:v0.4.9
Port: 9100/TCP
Host Port: 9100/TCP
Environment:
SRC_PORT: 9100
SRC_RANGES: 0.0.0.0/0
DEST_PROTO: TCP
DEST_PORT: 9100
DEST_IPS: 10.43.42.172
Mounts: <none>
lb-tcp-80:
Image: rancher/klipper-lb:v0.4.9
Port: 80/TCP
Host Port: 80/TCP
Environment:
SRC_PORT: 80
SRC_RANGES: 0.0.0.0/0
DEST_PROTO: TCP
DEST_PORT: 80
DEST_IPS: 10.43.42.172
Mounts: <none>
lb-tcp-443:
Image: rancher/klipper-lb:v0.4.9
Port: 443/TCP
Host Port: 443/TCP
Environment:
SRC_PORT: 443
SRC_RANGES: 0.0.0.0/0
DEST_PROTO: TCP
DEST_PORT: 443
DEST_IPS: 10.43.42.172
Mounts: <none>
Conditions:
Type Status
PodScheduled False
Volumes: <none>
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: CriticalAddonsOnly op=Exists
node-role.kubernetes.io/control-plane:NoSchedule op=Exists
node-role.kubernetes.io/master:NoSchedule op=Exists
node.kubernetes.io/disk-pressure:NoSchedule op=Exists
node.kubernetes.io/memory-pressure:NoSchedule op=Exists
node.kubernetes.io/not-ready:NoExecute op=Exists
node.kubernetes.io/pid-pressure:NoSchedule op=Exists
node.kubernetes.io/unreachable:NoExecute op=Exists
node.kubernetes.io/unschedulable:NoSchedule op=Exists
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 24m default-scheduler 0/1 nodes are available: 1 node(s) didn't have free ports for the requested pod ports. preemption: 0/1 nodes are available: 1 node(s) didn't have free ports for the requested pod ports.
Warning FailedScheduling 19m (x2 over 22m) default-scheduler 0/1 nodes are available: 1 node(s) didn't have free ports for the requested pod ports. preemption: 0/1 nodes are available: 1 node(s) didn't have free ports for the requested pod ports.
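For reference, the host ports this pod requests (which must be free on the node for it to schedule) can be listed directly from its spec. This is a minimal sketch using the pod name from the describe output above:

```shell
# List each container in the pending svclb pod together with the
# hostPort it requests (pod name taken from the output above).
kubectl get pod svclb-traefik-bf918f33-gddm2 -n kube-system \
  -o jsonpath='{range .spec.containers[*]}{.name}{": "}{.ports[*].hostPort}{"\n"}{end}'
```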
I understand there is a port conflict, but which ports does the service actually need? 9100, 80, and 443 on the k3s host? I would appreciate any hints.
Please refer to these
K3s docs:
Follow the second link to set up Klipper LB (ServiceLB) on K3s.
If you need more guidance on how to proceed, please share those details.
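To answer the port question: yes, the svclb DaemonSet needs host ports 9100, 80, and 443 free on every node it runs on (that is exactly what the "didn't have free ports" scheduling error means). A quick way to find what already occupies them is sketched below; this assumes shell access to the k3s node, that `jq` is installed where `kubectl` runs, and note that 9100 is also the default port of a Prometheus node-exporter, a common culprit:

```shell
# On the k3s node: show which processes are already listening
# on the three host ports the svclb pod needs.
sudo ss -ltnp '( sport = :80 or sport = :443 or sport = :9100 )'

# From kubectl: find any other pods that declare the same hostPorts.
kubectl get pods -A -o json \
  | jq -r '.items[]
      | . as $p
      | .spec.containers[].ports[]?
      | select(.hostPort == 80 or .hostPort == 443 or .hostPort == 9100)
      | "\($p.metadata.namespace)/\($p.metadata.name) hostPort=\(.hostPort)"'
```

Once you move or remove whichever workload holds those ports, the svclb pod should schedule and the service should obtain its external IP.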