Kubernetes: sharing a directory between the replicas of a Deployment

Problem description (votes: 3, answers: 1)

I have a simple Deployment with 2 replicas.

I want every replica to use the same storage folder (a shared application upload folder).

I have been playing around with claims and volumes but haven't gotten anywhere, so I'm asking for quick help / an example.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: 'test-tomcat'
  labels:
    app: test-tomcat
spec:
  selector:
    matchLabels:
      app: test-tomcat
  replicas: 3
  template:
    metadata:
      name: 'test-tomcat'
      labels:
        app: test-tomcat
    spec:
      volumes:
      - name: 'data'
        persistentVolumeClaim:
          claimName: claim
      containers:
      - image: 'tomcat:9-alpine'
        volumeMounts:
        - name: 'data'
          mountPath: '/app/data'
        imagePullPolicy: Always
        name: 'tomcat'
        command: ['bin/catalina.sh', 'jpda', 'run']

kind: PersistentVolume
apiVersion: v1
metadata:
  name: volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 2Gi
  accessModes:
  - ReadWriteMany
  hostPath:
    path: "/mnt/data"

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: claim
spec:
  storageClassName: manual
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
kubernetes persistent-volumes persistent-volume-claims
1 Answer

7 votes

First, you need to decide which type of PersistentVolume you want to use. Here are a few examples for an on-premises cluster:

  • HostPath - a local path on the node. Consequently, if the first Pod is scheduled on Node1 and the second on Node2, the two replicas will see different storage. To solve that problem you can use one of the options below (see also the sketch after this list). Example of a HostPath PersistentVolume:

    kind: PersistentVolume
    apiVersion: v1
    metadata:
      name: example-pv
      labels:
        type: local
    spec:
      storageClassName: manual
      capacity:
        storage: 3Gi
      accessModes:
      - ReadWriteOnce
      hostPath:
        path: "/mnt/data"

  • NFS - a PersistentVolume of this type uses a Network File System. NFS is a distributed file system protocol that allows you to mount remote directories on your servers. You need to install an NFS server before using it with Kubernetes; here is a guide: How To Set Up an NFS Mount on Ubuntu. Example in Kubernetes:

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: example-pv
    spec:
      capacity:
        storage: 3Gi
      volumeMode: Filesystem
      accessModes:
      - ReadWriteOnce
      persistentVolumeReclaimPolicy: Recycle
      storageClassName: slow
      mountOptions:
      - hard
      - nfsvers=4.1
      nfs:
        path: /tmp
        server: 172.17.0.2

  • GlusterFS - GlusterFS is a scalable, distributed file system that aggregates disk storage resources from multiple servers into a single global namespace. As with NFS, you need to install GlusterFS before using it in Kubernetes; here are the instructions (link) and one more sample (one). Example in Kubernetes:

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: example-pv
      annotations:
        pv.beta.kubernetes.io/gid: "590"
    spec:
      capacity:
        storage: 3Gi
      accessModes:
      - ReadWriteMany
      glusterfs:
        endpoints: glusterfs-cluster
        path: myVol1
        readOnly: false
      persistentVolumeReclaimPolicy: Retain
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: glusterfs-cluster
    spec:
      ports:
      - port: 1
    ---
    apiVersion: v1
    kind: Endpoints
    metadata:
      name: glusterfs-cluster
    subsets:
    - addresses:
      - ip: 192.168.122.221
      ports:
      - port: 1
    - addresses:
      - ip: 192.168.122.222
      ports:
      - port: 1
    - addresses:
      - ip: 192.168.122.223
      ports:
      - port: 1
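
Since the question asks for one upload folder that every replica sees, and the replicas of a Deployment can be scheduled on different nodes, the PersistentVolume has to support the ReadWriteMany access mode; a plain HostPath volume cannot provide that across nodes. Below is a minimal sketch of an NFS-backed PV/PVC pair for that case, assuming an NFS server already exists; the server address 192.168.1.100, the export path /srv/uploads, and the object names are placeholders, not values from the question.

# Sketch only: the NFS server address, export path, and object names below are
# placeholders; replace them with values from your own environment.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: uploads-pv
spec:
  capacity:
    storage: 2Gi
  accessModes:
  - ReadWriteMany               # lets Pods on different nodes mount it read-write
  persistentVolumeReclaimPolicy: Retain
  storageClassName: manual
  nfs:
    server: 192.168.1.100       # placeholder: address of an existing NFS server
    path: /srv/uploads          # placeholder: directory exported by that server
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: uploads-claim           # reference this name in the Deployment's claimName
spec:
  storageClassName: manual
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi

With ReadWriteMany, both replicas can mount the same claim read-write no matter which node they land on, which is what a shared upload folder requires.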

After creating the PersistentVolume, you need to create a PersistentVolumeClaim. A PersistentVolumeClaim is the resource a Pod uses to request a volume from storage. Once a PersistentVolumeClaim is created, the Kubernetes control plane looks for a PersistentVolume that satisfies the claim's requirements (a matching storageClassName, a compatible access mode, and enough capacity). Example:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: example-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi

As the last step, you need to configure the Pod to use the PersistentVolumeClaim. Here is an example:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: 'test-tomcat'
  labels:
    app: test-tomcat
spec:
  selector:
    matchLabels:
      app: test-tomcat
  replicas: 3
  template:
    metadata:
      name: 'test-tomcat'
      labels:
        app: test-tomcat
    spec:
      volumes:
      - name: 'data'
        persistentVolumeClaim:
          claimName: example-pv-claim #name of the claim should be the same as defined before
      containers:
      - image: 'tomcat:9-alpine'
        volumeMounts:
        - name: 'data'
          mountPath: '/app/data'
        imagePullPolicy: Always
        name: 'tomcat'
        command: ['bin/catalina.sh', 'jpda', 'run']