I have an AKS cluster with Istio installed, along with its external ingress gateway. When I create an Ingress resource, I cannot reach it through the load balancer IP; the request times out after a while. If I create VirtualService and Gateway resources instead, everything works. Any ideas?
curl -vvvv -H "Host:demo-app-deployment.localdev.me" $INGRESS_HOST:$INGRESS_PORT
Ingress and application deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: demo-nginx
  name: demo-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: demo-nginx
  strategy: {}
  template:
    metadata:
      labels:
        app: demo-nginx
    spec:
      containers:
      - image: nginx
        name: nginx
        ports:
        - containerPort: 80
          name: web
        resources:
          limits:
            cpu: 1000m
            memory: 200Mi
          requests:
            cpu: 10m
            memory: 200Mi
status: {}
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: demo-nginx
  name: demo-nginx
spec:
  ports:
  - port: 80
    name: "web"
    protocol: TCP
    targetPort: "web"
  selector:
    app: demo-nginx
status:
  loadBalancer: {}
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-nginx
  annotations:
    kubernetes.io/ingress.class: istio
spec:
  ingressClassName: istio
  rules:
  - host: demo-app-deployment.localdev.me
    http:
      paths:
      - backend:
          service:
            name: demo-nginx
            port:
              number: 80
        path: /
        pathType: Prefix
Ingress gateway pod logs:
kubectl logs aks-istio-ingressgateway-external-asm-1-21-848779b5b7-2jjdq -n aks-istio-ingress
2024-04-28T14:21:39.798546Z info FLAG: --concurrency="0"
2024-04-28T14:21:39.798569Z info FLAG: --domain="aks-istio-ingress.svc.cluster.local"
2024-04-28T14:21:39.798574Z info FLAG: --help="false"
2024-04-28T14:21:39.798576Z info FLAG: --log_as_json="false"
2024-04-28T14:21:39.798579Z info FLAG: --log_caller=""
2024-04-28T14:21:39.798581Z info FLAG: --log_output_level="default:info"
2024-04-28T14:21:39.798583Z info FLAG: --log_rotate=""
2024-04-28T14:21:39.798585Z info FLAG: --log_rotate_max_age="30"
2024-04-28T14:21:39.798588Z info FLAG: --log_rotate_max_backups="1000"
2024-04-28T14:21:39.798590Z info FLAG: --log_rotate_max_size="104857600"
2024-04-28T14:21:39.798592Z info FLAG: --log_stacktrace_level="default:none"
2024-04-28T14:21:39.798597Z info FLAG: --log_target="[stdout]"
2024-04-28T14:21:39.798599Z info FLAG: --meshConfig="./etc/istio/config/mesh"
2024-04-28T14:21:39.798613Z info FLAG: --outlierLogPath=""
2024-04-28T14:21:39.798615Z info FLAG: --profiling="true"
2024-04-28T14:21:39.798618Z info FLAG: --proxyComponentLogLevel="misc:error"
2024-04-28T14:21:39.798620Z info FLAG: --proxyLogLevel="warning"
2024-04-28T14:21:39.798623Z info FLAG: --serviceCluster="istio-proxy"
2024-04-28T14:21:39.798625Z info FLAG: --stsPort="0"
2024-04-28T14:21:39.798628Z info FLAG: --templateFile=""
2024-04-28T14:21:39.798630Z info FLAG: --tokenManagerPlugin="GoogleTokenExchange"
2024-04-28T14:21:39.798634Z info FLAG: --vklog="0"
2024-04-28T14:21:39.798637Z info Version 1.21-dev-ee6e9f6a314224fd9bcb808ca1d74d1dc66adba8-Clean
2024-04-28T14:21:39.798848Z warn failed running ulimit command:
2024-04-28T14:21:39.799033Z info Proxy role ips=[10.244.1.13] type=router id=aks-istio-ingressgateway-external-asm-1-21-848779b5b7-2jjdq.aks-istio-ingress domain=aks-istio-ingress.svc.cluster.local
2024-04-28T14:21:39.799097Z info Apply proxy config from env {"discoveryAddress":"istiod-asm-1-21.aks-istio-system.svc:15012","tracing":{"zipkin":{"address":"zipkin.aks-istio-system:9411"}},"gatewayTopology":{"numTrustedProxies":1},"image":{"imageType":"distroless"}}
2024-04-28T14:21:39.800779Z info cpu limit detected as 2, setting concurrency
2024-04-28T14:21:39.801054Z info Effective config: binaryPath: /usr/local/bin/envoy
concurrency: 2
configPath: ./etc/istio/proxy
controlPlaneAuthPolicy: MUTUAL_TLS
discoveryAddress: istiod-asm-1-21.aks-istio-system.svc:15012
drainDuration: 45s
gatewayTopology:
  numTrustedProxies: 1
image:
  imageType: distroless
proxyAdminPort: 15000
serviceCluster: istio-proxy
statNameLength: 189
statusPort: 15020
terminationDrainDuration: 5s
tracing:
  zipkin:
    address: zipkin.aks-istio-system:9411
2024-04-28T14:21:39.801079Z info JWT policy is third-party-jwt
2024-04-28T14:21:39.801086Z info using credential fetcher of JWT type in cluster.local trust domain
2024-04-28T14:21:39.801289Z info platform detected is Azure
2024-04-28T14:21:39.808311Z warn HTTP request unsuccessful with status: 400 Bad Request
2024-04-28T14:21:39.818317Z info Workload SDS socket not found. Starting Istio SDS Server
2024-04-28T14:21:39.818345Z info CA Endpoint istiod-asm-1-21.aks-istio-system.svc:15012, provider Citadel
2024-04-28T14:21:39.818369Z info Using CA istiod-asm-1-21.aks-istio-system.svc:15012 cert with certs: var/run/secrets/istio/root-cert.pem
2024-04-28T14:21:39.818890Z info Opening status port 15020
2024-04-28T14:21:39.826844Z info ads All caches have been synced up in 28.703074ms, marking server ready
2024-04-28T14:21:39.827082Z info xdsproxy Initializing with upstream address "istiod-asm-1-21.aks-istio-system.svc:15012" and cluster "Kubernetes"
2024-04-28T14:21:39.829173Z info Pilot SAN: [istiod-asm-1-21.aks-istio-system.svc]
2024-04-28T14:21:39.830220Z info sds Starting SDS grpc server
2024-04-28T14:21:39.831508Z info starting Http service at 127.0.0.1:15004
2024-04-28T14:21:39.831793Z info Starting proxy agent
2024-04-28T14:21:39.832055Z info Envoy command: [-c etc/istio/proxy/envoy-rev.json --drain-time-s 45 --drain-strategy immediate --local-address-ip-version v4 --file-flush-interval-msec 1000 --disable-hot-restart --allow-unknown-static-fields --log-format %Y-%m-%dT%T.%fZ %l envoy %n %g:%# %v thread=%t -l warning --component-log-level misc:error --concurrency 2]
2024-04-28T14:21:39.913967Z info xdsproxy connected to upstream XDS server[1]: istiod-asm-1-21.aks-istio-system.svc:15012
2024-04-28T14:21:39.933163Z info ads ADS: new connection for node:aks-istio-ingressgateway-external-asm-1-21-848779b5b7-2jjdq.aks-istio-ingress-1
2024-04-28T14:21:39.934344Z info ads ADS: new connection for node:aks-istio-ingressgateway-external-asm-1-21-848779b5b7-2jjdq.aks-istio-ingress-2
2024-04-28T14:21:39.987458Z info cache generated new workload certificate latency=158.869015ms ttl=23h59m59.012546407s
2024-04-28T14:21:39.987650Z info cache Root cert has changed, start rotating root cert
2024-04-28T14:21:39.987740Z info ads XDS: Incremental Pushing ConnectedEndpoints:2 Version:
2024-04-28T14:21:39.987861Z info cache returned workload trust anchor from cache ttl=23h59m59.012140303s
2024-04-28T14:21:39.988000Z info cache returned workload certificate from cache ttl=23h59m59.012001502s
2024-04-28T14:21:39.988384Z info ads SDS: PUSH request for node:aks-istio-ingressgateway-external-asm-1-21-848779b5b7-2jjdq.aks-istio-ingress resources:1 size:4.0kB resource:default
2024-04-28T14:21:39.988477Z info cache returned workload trust anchor from cache ttl=23h59m59.011525197s
2024-04-28T14:21:39.988781Z info ads SDS: PUSH request for node:aks-istio-ingressgateway-external-asm-1-21-848779b5b7-2jjdq.aks-istio-ingress resources:1 size:1.1kB resource:ROOTCA
2024-04-28T14:21:39.988932Z info cache returned workload trust anchor from cache ttl=23h59m59.011069593s
2024-04-28T14:21:40.839485Z info Readiness succeeded in 1.049696312s
2024-04-28T14:21:40.839824Z info Envoy proxy is ready
2024-04-28T14:49:45.370889Z info xdsproxy connected to upstream XDS server[2]: istiod-asm-1-21.aks-istio-system.svc:15012
2024-04-28T15:21:34.608757Z info xdsproxy connected to upstream XDS server[3]: istiod-asm-1-21.aks-istio-system.svc:15012
2024-04-28T15:49:47.486824Z info xdsproxy connected to upstream XDS server[4]: istiod-asm-1-21.aks-istio-system.svc:15012
2024-04-28T16:22:08.624015Z info xdsproxy connected to upstream XDS server[5]: istiod-asm-1-21.aks-istio-system.svc:15012
2024-04-28T16:49:38.978403Z info xdsproxy connected to upstream XDS server[6]: istiod-asm-1-21.aks-istio-system.svc:15012
2024-04-28T17:16:57.062446Z info xdsproxy connected to upstream XDS server[7]: istiod-asm-1-21.aks-istio-system.svc:15012
Istio primarily relies on its own set of custom resources (Gateway and VirtualService) to manage ingress traffic. As a result, Istio may not handle certain Ingress configurations correctly, which leads to timeouts when you try to reach the application through the load balancer IP. For an AKS cluster with Istio to serve ingress traffic, first make sure Istio is installed and an ingress gateway is deployed in the cluster:
kubectl get pods -n <istio-system-namespace>
kubectl get svc -n <istio-system-namespace>
You should see the istio-ingressgateway service, which is usually of type LoadBalancer.
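For the AKS Istio add-on specifically, the external gateway lives in the aks-istio-ingress namespace (the namespace visible in your logs); a plausible check, assuming the add-on's default service name:
kubectl get svc aks-istio-ingressgateway-external -n aks-istio-ingress
The TYPE column should read LoadBalancer, and EXTERNAL-IP should show the Azure load balancer address once it has been provisioned.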
In this example, a sample Nginx application is deployed. Example Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-demo
  namespace: yournamespace
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-demo
  template:
    metadata:
      labels:
        app: nginx-demo
    spec:
      containers:
      - name: nginx
        image: nginx:stable
        ports:
        - containerPort: 80
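Once applied, you can confirm the pod is up before wiring any routing to it (the label matches the Deployment above):
kubectl get pods -n yournamespace -l app=nginx-demo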
Next, create a Service pointing at the Nginx Deployment. Example Service:
apiVersion: v1
kind: Service
metadata:
  name: nginx-demo-service
  namespace: yournamespace
spec:
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
    name: http
  selector:
    app: nginx-demo
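As a sanity check that the Service reaches the pod before Istio is involved at all, you can curl it from a throwaway pod inside the cluster (the pod name curl-test is arbitrary):
kubectl run curl-test -n yournamespace --rm -it --restart=Never --image=curlimages/curl -- curl -s http://nginx-demo-service.yournamespace.svc.cluster.local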
Then create an Istio Gateway and VirtualService to expose the Nginx service. Example Gateway:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: nginx-gateway
  namespace: yournamespace
spec:
  selector:
    istio: ingressgateway-external-asm-1-20 # This should match the label of your external gateway.
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
Then create the nginx-virtualservice. Example VirtualService:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: nginx-virtualservice
  namespace: yournamespace
spec:
  hosts:
  - "*"
  gateways:
  - nginx-gateway
  http:
  - match:
    - uri:
        prefix: /
    route:
    - destination:
        host: nginx-demo-service
        port:
          number: 80
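One detail worth noting: nginx-demo-service is a short name, which Istio resolves relative to the VirtualService's namespace. If the Gateway and VirtualService end up in a different namespace than the Service, it is safer to fully qualify the destination, e.g.:
    route:
    - destination:
        host: nginx-demo-service.yournamespace.svc.cluster.local # fully qualified to avoid namespace ambiguity
        port:
          number: 80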
Apply them:
kubectl apply -f <filename.yaml>
To reach your service, you need the external IP of the Istio Ingress Gateway:
kubectl get svc -n <youristionamespace>
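If you want just the IP by itself (again assuming the add-on's default service name; adjust it to yours), a jsonpath query works:
kubectl get svc aks-istio-ingressgateway-external -n aks-istio-ingress -o jsonpath='{.status.loadBalancer.ingress[0].ip}'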
You can now curl the gateway, provided the hostname (demo-app-deployment.localdev.me) resolves to the Istio Ingress Gateway's external IP. You can temporarily add it to your /etc/hosts file:
57.151.36.51 demo-app-deployment.localdev.me
and test it:
curl http://demo-app-deployment.localdev.me
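Alternatively, curl can pin the hostname to the IP for a single request without touching /etc/hosts:
curl --resolve demo-app-deployment.localdev.me:80:57.151.36.51 http://demo-app-deployment.localdev.me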
Other things to check: verify the Service endpoints to make sure they are ready and the pod IPs match:
kubectl get endpoints nginx-demo-service -n <yournamespace>
Check the ingress gateway pod's logs for any errors:
kubectl logs <ingress-gateway-pod-name> -n <namespace>
It is also important to make sure the Ingress resource is associated with the correct Istio Gateway and VirtualService.
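If you have istioctl installed, its analyzer is often the fastest way to spot a selector or host mismatch between these resources; run it against the namespaces involved:
istioctl analyze -n yournamespace
istioctl analyze -n aks-istio-ingress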
References:
Check this: https://medium.com/microsoftazure/cert-manager-and-istio-choosing-ingress-options-for-the-istio-based-service-mesh-add-on-for-aks-c633c97fa4f2 It may give you the answer and a possible solution.
I ran into the same problem with the Azure Istio add-on. And yes, we do need Ingress support to generate Let's Encrypt certificates, because an Ingress is created to solve the Let's Encrypt challenge every time a certificate is issued.
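For reference, this is roughly what that looks like in a cert-manager ClusterIssuer, assuming cert-manager is installed and istio is the ingress class the add-on registers (issuer name and email are placeholders):
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: you@example.com
    privateKeySecretRef:
      name: letsencrypt-account-key
    solvers:
    - http01:
        ingress:
          class: istio # cert-manager creates a temporary challenge Ingress served by this class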