Infinispan as both Vert.x cluster manager and cache manager on Kubernetes: cache not shared

Problem description (votes: 0, answers: 1)

The goal of the project is to connect Vert.x applications (verticles) into a cluster with the help of Infinispan, where the verticles share a single replicated cache collection that is also managed by Infinispan.

I can see that Infinispan's cluster management works well when deployed on Kubernetes. Here is the deployment information on Kubernetes:

root@vertx-ollama-control-plane:/# kubectl get all
NAME                                       READY   STATUS    RESTARTS        AGE
pod/backend-deployment-9dd7d994c-d6w5m     1/1     Running   1 (7m34s ago)   8m13s
pod/backend-deployment-9dd7d994c-d7ttq     1/1     Running   0               8m13s
pod/backend-deployment-9dd7d994c-zgjhx     1/1     Running   1 (7m34s ago)   8m13s
pod/frontend-deployment-6557dd4466-bf2gj   1/1     Running   0               8m13s
pod/frontend-deployment-6557dd4466-f6fqz   1/1     Running   0               8m13s
pod/ollama-858d4f8c8d-fggwb                1/1     Running   0               8m13s

NAME                    TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
service/clustered-app   ClusterIP      None            <none>        7800/TCP       8m13s
service/frontend        LoadBalancer   10.96.229.141   <pending>     80:32375/TCP   8m13s
service/kubernetes      ClusterIP      10.96.0.1       <none>        443/TCP        3h26m
service/ollama          ClusterIP      10.96.54.50     <none>        11434/TCP      8m13s

NAME                                  READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/backend-deployment    3/3     3            3           8m13s
deployment.apps/frontend-deployment   2/2     2            2           8m13s
deployment.apps/ollama                1/1     1            1           8m13s

NAME                                             DESIRED   CURRENT   READY   AGE
replicaset.apps/backend-deployment-9dd7d994c     3         3         3       8m13s
replicaset.apps/frontend-deployment-6557dd4466   2         2         2       8m13s
replicaset.apps/ollama-858d4f8c8d                1         1         1       8m13s

Here are the IP addresses of the verticles exposed through the headless service:

root@vertx-ollama-control-plane:/# kubectl get endpoints
NAME            ENDPOINTS                                                        AGE
clustered-app   10.244.0.80:7800,10.244.0.81:7800,10.244.0.82:7800 + 2 more...   9m16s
frontend        10.244.0.83:8080,10.244.0.84:8080                                9m16s
kubernetes      172.18.0.4:6443                                                  3h27m
ollama          10.244.0.79:11434                                                9m16s

The problem is that the pods do not share the Infinispan cache (named "embeddings"); instead, each pod ends up with its own local collection. Here are the configuration and code I use to try to share the cache across the cluster:


        // Configure default cache manager
        DefaultCacheManager cacheManager = new DefaultCacheManager(
                new GlobalConfigurationBuilder()
                        .transport()
                        .defaultTransport()
                        .build()
        );
        clusterManager = new InfinispanClusterManager(cacheManager);

        // Configure the cache for embeddings
        Configuration cacheConfig = new ConfigurationBuilder().clustering()
                .cacheMode(CacheMode.REPL_ASYNC)
                .encoding()
                .mediaType(MediaType.APPLICATION_OBJECT_TYPE)
                .build();
        
        ... 

        if (cacheManager.cacheExists("embeddings")) {
            logger.info(String.format("Cache %s exists with the hashcode of %d on %s node.",
                    "embeddings",
                    cacheManager.getCache("embeddings").hashCode(),
                    cacheManager.getNodeAddress())
            );
            collection = cacheManager.getCache("embeddings");
        } else {
            logger.info(String.format("Cache %s does not exist, a new cache is created on %s node.",
                    "embeddings",
                    cacheManager.getNodeAddress()
            ));
            collection = cacheManager.createCache("embeddings", cacheConfig);
        }

    // ...

    public static void main(String[] args) {
        Vertx.clusteredVertx(new VertxOptions().setClusterManager(clusterManager))
                .compose(v -> v.deployVerticle(new Main()))
                .onFailure(Throwable::printStackTrace);
    }

When deploying the pods, I inject the JGroups Kubernetes XML configuration file as a JVM option through the environment, like:

-Dvertx.jgroups.config=default-configs/default-jgroups-kubernetes.xml


However, when storing embeddings in the Infinispan cache multiple times, I noticed that each pod creates its own cache named "embeddings".

http POST :8080/embed prompt="Llamas are members of the camelid family meaning they're pretty closely related to vicuñas and camels"
HTTP/1.1 200 OK
content-length: 105

Embedding entry stored with key: 451439790
From: backend-deployment-9dd7d994c-zgjhx (Collection Size: 1)

...**(logs from Pod A)**
Dec 14, 2024 4:10:26 AM cynicdog.io.api.OllamaAPI
INFO: **Cache embeddings does not exist**, a new cache is created on backend-deployment-9dd7d994c-d6w5m-37175 node
http POST :8080/embed prompt="Llamas are members of the camelid family meaning they're pretty closely related to vicuñas and camels"
HTTP/1.1 200 OK
content-length: 105

Embedding entry stored with key: 451439790
From: backend-deployment-9dd7d994c-d6w5m (Collection Size: 1)

...**(logs from Pod B)**
INFO: Model mxbai-embed-large:latest pulled.
Dec 14, 2024 4:10:26 AM cynicdog.io.api.OllamaAPI
INFO: **Cache embeddings does not exist**, a new cache is created on backend-deployment-9dd7d994c-d6w5m-37175 node.

Am I missing something? I'm attaching the project's GitHub repository in case a detailed look is needed:

https://github.com/CynicDog/Vertx-Kubernetes-Integration/tree/main/clustered-embedding-stores

kubernetes vert.x infinispan
1 Answer

0 votes

When you create the cluster manager with a custom cache manager, that cache manager must be configured with the caches Vert.x requires:

<cache-container default-cache="distributed-cache">
 <distributed-cache name="distributed-cache"/>
 <replicated-cache name="__vertx.subs"/>
 <replicated-cache name="__vertx.haInfo"/>
 <replicated-cache name="__vertx.nodeInfo"/>
 <distributed-cache-configuration name="__vertx.distributed.cache.configuration"/>
</cache-container>

This is the XML configuration, but you should be able to do the same programmatically.
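A minimal sketch of the programmatic equivalent, assuming the cache names from the XML above and using Infinispan's `defineConfiguration` API (the exact cache modes Vert.x expects are an assumption here; check the Vert.x Infinispan cluster manager's bundled XML for the authoritative settings):

```java
import org.infinispan.configuration.cache.CacheMode;
import org.infinispan.configuration.cache.Configuration;
import org.infinispan.configuration.cache.ConfigurationBuilder;
import org.infinispan.configuration.global.GlobalConfigurationBuilder;
import org.infinispan.manager.DefaultCacheManager;

// Build the clustered cache manager as in the question
DefaultCacheManager cacheManager = new DefaultCacheManager(
        new GlobalConfigurationBuilder().transport().defaultTransport().build());

// Replicated configuration for the internal Vert.x caches
// (assumption: REPL_SYNC, mirroring the <replicated-cache> elements above)
Configuration replicated = new ConfigurationBuilder()
        .clustering().cacheMode(CacheMode.REPL_SYNC)
        .build();

// Register the caches Vert.x needs before handing the manager to
// InfinispanClusterManager, so every node agrees on their definitions
for (String name : new String[]{"__vertx.subs", "__vertx.haInfo", "__vertx.nodeInfo"}) {
    cacheManager.defineConfiguration(name, replicated);
}
```

With these definitions in place, the custom cache manager can be passed to `new InfinispanClusterManager(cacheManager)` as in the question's code.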

Second, you must follow the steps for configuring Infinispan on Kubernetes.

Note, however, that as mentioned in the comments, when the cluster manager is created with a custom cache manager, the

vertx.jgroups.config

system property is ignored.

So you have to configure JGroups programmatically.
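A sketch of pointing the Infinispan transport at the Kubernetes JGroups stack directly, via the transport's `configurationFile` property (assumption: the XML ships on the classpath at the same path the question's `-D` flag used):

```java
import org.infinispan.configuration.global.GlobalConfigurationBuilder;
import org.infinispan.manager.DefaultCacheManager;

// Configure JGroups on the cache manager itself, since
// vertx.jgroups.config is ignored with a custom cache manager
GlobalConfigurationBuilder global = new GlobalConfigurationBuilder();
global.transport()
      .defaultTransport()
      // "configurationFile" is the standard Infinispan transport property
      // for selecting a JGroups stack XML from the classpath
      .addProperty("configurationFile", "default-configs/default-jgroups-kubernetes.xml");

DefaultCacheManager cacheManager = new DefaultCacheManager(global.build());
```

This way the Kubernetes discovery stack (KUBE_PING/DNS_PING) is applied regardless of how the JVM is launched.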
