Getting Started with Rook-EdgeFS

Introduction

In this post, I try EdgeFS, one of the storage providers that Rook can manage. The EdgeFS Quickstart in the official Rook documentation goes as far as deploying Rook-EdgeFS on a single Kubernetes cluster. Here I go one step further: creating an NFS service on the EdgeFS cluster and using the storage over the NFS protocol.

An overview of EdgeFS itself is planned for a separate article.

What is EdgeFS?

EdgeFS is a high-performance, fault-tolerant, decentralized Data Fabric that provides access to file, block, and object data. By connecting geographically distributed sites into a global namespace, it enables a storage system with multi-cluster, multi-region data flows. Within Rook, it is the second project, after Ceph, to reach Stable status.

[Figure: Rook-EdgeFS architecture]

Reference:

ASCII.jp - アプリケーションに最適なクラウドを実現するData Fabricとは? (What is the Data Fabric that delivers the optimal cloud for applications?)

Test Environment

The environment used here is a Kubernetes cluster made up of vm0 (the master node, from which kubectl is run) and vm1–vm3 (worker nodes); each node has a local disk mounted at /data that is used for EdgeFS data.

Building the EdgeFS Environment

From here on, we build the EdgeFS environment. The steps are based on the official Rook documentation.

Prerequisites

  • To operate EdgeFS smoothly, at least 1 core and 1 GB of memory per storage device are required
  • Target Pods require a minimum of 4 GB of memory
  • The location specified by dataDirHostPath must have at least 5 GB of free space

In addition, the following recommendations are listed.

  • To make full use of SSD/NVMe devices, 2 cores and 2 GB of memory per device are required
  • Using raw devices and distributing the available storage capacity evenly are recommended
    • Specify spec.storage.device in the Cluster resource
    • To disable EdgeFS's automatic node configuration, enable skipHostPrepare
      • By default, EdgeFS applies settings such as the following to /etc/sysctl.conf on each node it is deployed to, adjusting send/receive window sizes, write-back thresholds, memory-cache behavior, and so on (a quick way to verify the applied values is shown right after this block).
net.core.rmem_default = 80331648
net.core.rmem_max = 80331648
net.core.wmem_default = 33554432
net.core.wmem_max = 50331648
vm.dirty_ratio = 10
vm.dirty_background_ratio = 5
vm.swappiness = 15
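
To check whether these kernel parameters are actually in effect on a node, they can be queried with sysctl; a minimal verification sketch (run on a node after EdgeFS has prepared the host):

# Query the parameters that EdgeFS adjusts during host preparation
sysctl net.core.rmem_max net.core.wmem_max vm.dirty_ratio vm.dirty_background_ratio vm.swappiness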

Creating the Operator

The first step in building EdgeFS is to deploy operator.yaml, which defines the Operator and the various resources required to use EdgeFS. The operator.yaml used here is shown below.

Contents of operator.yaml

apiVersion: v1
kind: Namespace
metadata:
  name: rook-edgefs-system
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: clusters.edgefs.rook.io
spec:
  group: edgefs.rook.io
  names:
    kind: Cluster
    listKind: ClusterList
    plural: clusters
    singular: cluster
  scope: Namespaced
  version: v1
  validation:
    openAPIV3Schema:
      properties:
        spec:
          properties:
            edgefsImageName:
              type: string
            dataDirHostPath:
              pattern: ^/(\S+)
              type: string
            devicesResurrectMode:
              pattern: ^(restore|restoreZap|restoreZapWait)$
              type: string
            dashboard:
              properties:
                localAddr:
                  type: string
            network:
              properties:
                serverIfName:
                  type: string
                brokerIfName:
                  type: string
            skipHostPrepare:
              type: boolean
            storage:
              properties:
                nodes:
                  items: {}
                  type: array
                useAllDevices: {}
                useAllNodes:
                  type: boolean
          required:
          - edgefsImageName
          - dataDirHostPath
  additionalPrinterColumns:
    - name: Image
      type: string
      description: Edgefs target image
      JSONPath: .spec.edgefsImageName
    - name: HostPath
      type: string
      description: Directory used on the Kubernetes nodes to store Edgefs data
      JSONPath: .spec.dataDirHostPath
    - name: Age
      type: date
      JSONPath: .metadata.creationTimestamp
    - name: State
      type: string
      description: Current State
      JSONPath: .status.state
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: nfss.edgefs.rook.io
spec:
  group: edgefs.rook.io
  names:
    kind: NFS
    listKind: NFSList
    plural: nfss
    singular: nfs
  scope: Namespaced
  version: v1
  validation:
    openAPIV3Schema:
      properties:
        spec:
          properties:
            instances:
              type: integer
              minimum: 1
          required:
          - instances
  additionalPrinterColumns:
    - name: Instances
      type: string
      description: Edgefs's service instances count
      JSONPath: .spec.instances
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: swifts.edgefs.rook.io
spec:
  group: edgefs.rook.io
  names:
    kind: SWIFT
    listKind: SWIFTList
    plural: swifts
    singular: swift
  scope: Namespaced
  version: v1
  validation:
    openAPIV3Schema:
      properties:
        spec:
          properties:
            instances:
              type: integer
              minimum: 1
          required:
          - instances
  additionalPrinterColumns:
    - name: Instances
      type: string
      description: Edgefs's service instances count
      JSONPath: .spec.instances
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: s3s.edgefs.rook.io
spec:
  group: edgefs.rook.io
  names:
    kind: S3
    listKind: S3List
    plural: s3s
    singular: s3
  scope: Namespaced
  version: v1
  validation:
    openAPIV3Schema:
      properties:
        spec:
          properties:
            instances:
              type: integer
              minimum: 1
          required:
          - instances
  additionalPrinterColumns:
    - name: Instances
      type: string
      description: Edgefs's service instances count
      JSONPath: .spec.instances
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: s3xs.edgefs.rook.io
spec:
  group: edgefs.rook.io
  names:
    kind: S3X
    listKind: S3XList
    plural: s3xs
    singular: s3x
  scope: Namespaced
  version: v1
  validation:
    openAPIV3Schema:
      properties:
        spec:
          properties:
            instances:
              type: integer
              minimum: 1
          required:
          - instances
  additionalPrinterColumns:
    - name: Instances
      type: string
      description: Edgefs's service instances count
      JSONPath: .spec.instances
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: iscsis.edgefs.rook.io
spec:
  group: edgefs.rook.io
  names:
    kind: ISCSI
    listKind: ISCSIList
    plural: iscsis
    singular: iscsi
  scope: Namespaced
  version: v1
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: isgws.edgefs.rook.io
spec:
  group: edgefs.rook.io
  names:
    kind: ISGW
    listKind: ISGWList
    plural: isgws
    singular: isgw
  scope: Namespaced
  version: v1
  validation:
    openAPIV3Schema:
      properties:
        spec:
          properties:
            direction:
              type: string
              pattern: ^(send|receive|send\+receive)$
            remoteURL:
              type: string
            config:
              type: object
              properties:
                server:
                  type: string
                clients:
                  type: array
                  items:
                    type: string
          required:
          - direction
  additionalPrinterColumns:
    - name: Direction
      type: string
      description: ISGW service direction
      JSONPath: .spec.direction
    - name: RemoteEndpoint
      type: string
      description: Remote ISGW service endpoint
      JSONPath: .spec.remoteURL
    - name: Server
      type: string
      JSONPath: .spec.config.server
      description: ISGW server' service name
    - name: Clients
      type: string
      JSONPath: .spec.config.clients
      description: ISGW client' service names
---
# The cluster role for managing all the cluster-specific resources in a namespace
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: rook-edgefs-cluster-mgmt
  labels:
    operator: rook
    storage-backend: edgefs
rules:
- apiGroups: [""]
  resources: ["secrets", "pods", "nodes", "services", "configmaps", "endpoints"]
  verbs: ["get", "list", "watch", "patch", "create", "update", "delete"]
- apiGroups: ["apps"]
  resources: ["statefulsets", "statefulsets/scale"]
  verbs: ["create", "delete", "deletecollection", "patch", "update"]
- apiGroups: ["apps"]
  resources: ["deployments", "daemonsets", "replicasets", "statefulsets"]
  verbs: ["get", "list", "watch", "create", "update", "delete"]
---
# The role for the operator to manage resources in the system namespace
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: rook-edgefs-system
  namespace: rook-edgefs-system
  labels:
    operator: rook
    storage-backend: edgefs
rules:
- apiGroups: [""]
  resources: ["pods", "nodes", "configmaps"]
  verbs: ["get", "list", "watch", "patch", "create", "update", "delete"]
- apiGroups: ["apps"]
  resources: ["daemonsets"]
  verbs: ["get", "list", "watch", "create", "update", "delete"]
---
# The cluster role for managing the Rook CRDs
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: rook-edgefs-global
  labels:
    operator: rook
    storage-backend: edgefs
rules:
- apiGroups: [""]
  # Pod access is needed for fencing
  # Node access is needed for determining nodes where mons should run
  resources: ["pods", "nodes", "nodes/proxy"]
  verbs: ["get", "list", "watch", "update", "patch"]
- apiGroups: [""]
  # PVs and PVCs are managed by the Rook provisioner
  resources: ["events", "persistentvolumes", "persistentvolumeclaims"]
  verbs: ["get", "list", "watch", "patch", "create", "update", "delete"]
- apiGroups: ["storage.k8s.io"]
  resources: ["storageclasses"]
  verbs: ["get", "list", "watch"]
- apiGroups: ["batch"]
  resources: ["jobs"]
  verbs: ["get", "list", "watch", "create", "update", "delete"]
- apiGroups: ["edgefs.rook.io"]
  resources: ["*"]
  verbs: ["*"]
- apiGroups: ["rook.io"]
  resources: ["*"]
  verbs: ["*"]
---
# The rook system service account used by the operator, agent, and discovery pods
apiVersion: v1
kind: ServiceAccount
metadata:
  name: rook-edgefs-system
  namespace: rook-edgefs-system
  labels:
    operator: rook
    storage-backend: edgefs
---
# Grant the operator, agent, and discovery agents access to resources in its own namespace
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: rook-edgefs-system
  namespace: rook-edgefs-system
  labels:
    operator: rook
    storage-backend: edgefs
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: rook-edgefs-system
subjects:
- kind: ServiceAccount
  name: rook-edgefs-system
  namespace: rook-edgefs-system
---
# Grant the rook system daemons cluster-wide access to manage the Rook CRDs, PVCs, and storage classes
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: rook-edgefs-global
  namespace: rook-edgefs-system
  labels:
    operator: rook
    storage-backend: edgefs
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: rook-edgefs-global
subjects:
- kind: ServiceAccount
  name: rook-edgefs-system
  namespace: rook-edgefs-system
---
# The deployment for the rook operator
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rook-edgefs-operator
  namespace: rook-edgefs-system
  labels:
    operator: rook
    storage-backend: edgefs
spec:
  selector:
    matchLabels:
      app: rook-edgefs-operator
  replicas: 1
  template:
    metadata:
      labels:
        app: rook-edgefs-operator
    spec:
      serviceAccountName: rook-edgefs-system
      containers:
      - name: rook-edgefs-operator
        image: rook/edgefs:master
        imagePullPolicy: "Always"
        args: ["edgefs", "operator"]
        env:
        - name: ROOK_LOG_LEVEL
          value: "INFO"
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        # Rook Discover toleration. Will tolerate all taints with all keys.
        # Choose between NoSchedule, PreferNoSchedule and NoExecute:
        # - name: DISCOVER_TOLERATION
        #   value: "NoSchedule"
        # (Optional) Rook Discover toleration key. Set this to the key of the taint you want to tolerate
        # - name: DISCOVER_TOLERATION_KEY
        #   value: "<KeyOfTheTaintToTolerate>"

When using GCP, you need to grant your user the permission to create Roles; this does not apply here, so I skip that step.
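
For reference, on GKE this is typically done by binding your own account to cluster-admin before deploying the operator; a hedged example (not required in this environment):

# Grant the current gcloud user permission to create RBAC roles (GKE only)
kubectl create clusterrolebinding cluster-admin-binding \
  --clusterrole=cluster-admin \
  --user=$(gcloud config get-value account)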

Deploy operator.yaml as follows.

[root@vm0 edgefs]# kubectl apply -f operator.yaml
namespace/rook-edgefs-system created
customresourcedefinition.apiextensions.k8s.io/clusters.edgefs.rook.io created
customresourcedefinition.apiextensions.k8s.io/nfss.edgefs.rook.io created
customresourcedefinition.apiextensions.k8s.io/swifts.edgefs.rook.io created
customresourcedefinition.apiextensions.k8s.io/s3s.edgefs.rook.io created
customresourcedefinition.apiextensions.k8s.io/s3xs.edgefs.rook.io created
customresourcedefinition.apiextensions.k8s.io/iscsis.edgefs.rook.io created
customresourcedefinition.apiextensions.k8s.io/isgws.edgefs.rook.io created
clusterrole.rbac.authorization.k8s.io/rook-edgefs-cluster-mgmt created
role.rbac.authorization.k8s.io/rook-edgefs-system created
clusterrole.rbac.authorization.k8s.io/rook-edgefs-global created
serviceaccount/rook-edgefs-system created
rolebinding.rbac.authorization.k8s.io/rook-edgefs-system created
clusterrolebinding.rbac.authorization.k8s.io/rook-edgefs-global created
deployment.apps/rook-edgefs-operator created

Check the resources after deployment. In addition to the rook-edgefs-operator Pod, rook-discover Pods are created. The rook-discover Pods scan the storage devices on each node and compile a list of the available storage.

[root@vm0 edgefs]# kubectl get pods -n rook-edgefs-system -o wide
NAME                                    READY   STATUS    RESTARTS   AGE    IP             NODE   NOMINATED NODE   READINESS GATES
rook-discover-jnxpm                     1/1     Running   0          70s    10.244.3.137   vm3    <none>           <none>
rook-discover-ks854                     1/1     Running   0          70s    10.244.2.242   vm2    <none>           <none>
rook-discover-m7vbx                     1/1     Running   0          70s    10.244.1.129   vm1    <none>           <none>
rook-edgefs-operator-66fdc8d49f-t5fsq   1/1     Running   0          109s   10.244.1.128   vm1    <none>           <none>
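
The device information gathered by rook-discover is stored as ConfigMaps in the operator namespace (to my understanding they are named local-device-<node name>, though the naming may vary between Rook versions), so it can be inspected with kubectl:

# List the per-node device ConfigMaps created by rook-discover
kubectl get configmaps -n rook-edgefs-system

# Dump the devices discovered on one node (the node name is an example)
kubectl get configmap local-device-vm1 -n rook-edgefs-system -o yaml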

Creating the Cluster

Next, deploy cluster.yaml, which defines the Cluster resource along with the other resources it requires. The cluster.yaml provided for testing looks like this.

Contents of cluster.yaml

apiVersion: v1
kind: Namespace
metadata:
  name: rook-edgefs
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: rook-edgefs-cluster
  namespace: rook-edgefs
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: rook-edgefs-cluster
  namespace: rook-edgefs
rules:
- apiGroups: [""]
  resources: ["configmaps", "endpoints"]
  verbs: [ "get", "list", "watch", "create", "update", "delete" ]
- apiGroups: ["edgefs.rook.io"]
  resources: ["*"]
  verbs: ["*"]
- apiGroups: [""]
  resources: ["pods"]
  verbs: [ "get", "list" ]
- apiGroups: ["extensions"]
  resources: ["deployments/scale"]
  verbs: [ "get", "update" ]
---
# Allow the operator to create resources in this cluster's namespace
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: rook-edgefs-cluster-mgmt
  namespace: rook-edgefs
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: rook-edgefs-cluster-mgmt
subjects:
- kind: ServiceAccount
  name: rook-edgefs-system
  namespace: rook-edgefs-system
---
# Allow the pods in this namespace to work with configmaps
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: rook-edgefs-cluster
  namespace: rook-edgefs
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: rook-edgefs-cluster
subjects:
- kind: ServiceAccount
  name: rook-edgefs-cluster
  namespace: rook-edgefs
---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: privileged
spec:
  fsGroup:
    rule: RunAsAny
  privileged: true
  runAsUser:
    rule: RunAsAny
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  volumes:
  - '*'
  allowedCapabilities:
  - '*'
  hostPID: true
  hostIPC: true
  hostNetwork: false
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: privileged-psp-user
rules:
- apiGroups:
  - apps
  resources:
  - podsecuritypolicies
  resourceNames:
  - privileged
  verbs:
  - use
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: rook-edgefs-system-psp
  namespace: rook-edgefs
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: privileged-psp-user
subjects:
- kind: ServiceAccount
  name: rook-edgefs-system
  namespace: rook-edgefs-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: rook-edgefs-cluster-psp
  namespace: rook-edgefs
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: privileged-psp-user
subjects:
- kind: ServiceAccount
  name: rook-edgefs-cluster
  namespace: rook-edgefs
---
apiVersion: edgefs.rook.io/v1
kind: Cluster
metadata:
  name: rook-edgefs
  namespace: rook-edgefs
spec:
  edgefsImageName: edgefs/edgefs:latest  # specify version here, i.e. edgefs/edgefs:1.2.117 etc
  serviceAccount: rook-edgefs-cluster
  dataDirHostPath: /var/lib/edgefs
  #dataVolumeSize: 10Gi
  #devicesResurrectMode: "restoreZapWait"
  #dashboard:
  #  localAddr: 10.3.30.75
  #network: # cluster level networking configuration
  #  provider: host
  #  selectors:
  #    server: "enp2s0f0"
  #    broker: "enp2s0f0"
  #skipHostPrepare: true
  #maxContainerCapacity: 132Ti
  #sysRepCount: 1                  # SystemReplicationCount [1..n](default is 3)
  #failureDomain: "device"         # Cluster's failureDomain ["device", "host", "zone"] (default is "host")
  #trlogProcessingInterval: 2      # set transaction log processing interval to 2s to speed up ISGW Link delivery
  #trlogKeepDays: 2                # keep up to 2 days of transaction log interval batches to reduce local storage overhead
  #useHostLocalTime: true
  storage: # cluster level storage configuration and selection
    useAllNodes: true
  #  directories:
  #  - path: /mnt/disks/ssd0
  #  - path: /mnt/disks/ssd1
  #  - path: /mnt/disks/ssd2
    useAllDevices: true
  #  config:
  #    mdReserved: "30"            # allocate only 30% of offloaded SSD/NVMe slice for Metadata, the rest keep for BCache
  #    hddReadAhead: "2048"        # speed up reads of 2MB+ chunks of HDD (offload use case)
  #    rtVerifyChid: "0"           # may improve CPU utilization
  #    lmdbPageSize: "32768"       # larger value can improve stream operations
  #    lmdbMdPageSize: "4096"      # smaller value can improve metadata offload device utilization
  #    useMetadataOffload: "true"  # enable use of SSD device as metadata offload
  #    useBCache: "true"           # enable SSD cache device and read-cache
  #    useBCacheWB: "true"         # enable SSD write-cache
  #    useMetadataMask: "0x7d"     # all metadata on SSD except second level manifests
  #    rtPLevelOverride: "4"       # enable large device partitioning, only needed if automatic not working
  #    sync: "0"                   # highest performance, consistent on pod/software failures, not-consistent on power failures
  #    useAllSSD: "true"           # use only SSDs during deployment
  #    zone: "1"                   # defines failure domain's zone number for all edgefs nodes
  #  nodes:
  #  - name: node3071ub16
  #  - name: node3072ub16
  #  - name: node3073ub16
  #  - name: node3074ub16 # node level storage configuration
  #    devices: # specific devices to use for storage can be specified for each node
  #    - name: "sdb"
  #    - name: "sdc"
  #    config: # configuration can be specified at the node level which overrides the cluster level config
  #      rtPLevelOverride: 8
  #      zone: "2"  # defines failure domain's zone number for specific node(node3074ub16)
  #resources:
  #  limits:
  #    cpu: "2"
  #    memory: "4096Mi"
  #  requests:
  #    cpu: "2"
  #    memory: "4096Mi"
  # A key value list of annotations
  #annotations:
  #  all:
  #    key: value
  #  mgr:
  #  prepare:
  #  target:
  #placement:
  #  all:
  #    nodeAffinity:
  #      requiredDuringSchedulingIgnoredDuringExecution:
  #        nodeSelectorTerms:
  #        - matchExpressions:
  #          - key: nodekey
  #            operator: In
  #            values:
  #            - edgefs-target
  #    tolerations:
  #    - key: taintKey
  #      operator: Exists

This time I deploy the Cluster resource with a few modifications. The modified cluster.yaml (only the Cluster resource portion) is shown below.

---
apiVersion: edgefs.rook.io/v1
kind: Cluster
metadata:
  name: rook-edgefs
  namespace: rook-edgefs
spec:
  edgefsImageName: edgefs/edgefs:latest  # specify version here, i.e. edgefs/edgefs:1.2.117 etc
  serviceAccount: rook-edgefs-cluster
  dataDirHostPath: /data
  storage: # cluster level storage configuration and selection
    useAllNodes: true
    useAllDevices: false
    directories:
    - path: /data

There are two main changes.

  • Changed dataDirHostPath from the default /var/lib/edgefs to /data
  • Changed useAllDevices to false and specified /data as the directory to use

Regarding the second change: with useAllDevices set to true, creation of the edgefs-target Pods failed, so I switched to specifying explicitly what to use. Specifying individual devices via devices is actually the recommended approach, but using a directory makes cleanup easier when the cluster is deleted, so I specified a directory here.
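
For reference, the recommended device-based configuration would follow the commented-out example in cluster.yaml above; a minimal sketch (node and device names are placeholders, not taken from this environment):

  storage:
    useAllNodes: false
    useAllDevices: false
    nodes:
    - name: vm1            # node name is an example
      devices:
      - name: "sdb"        # raw device dedicated to EdgeFS on this node
    - name: vm2
      devices:
      - name: "sdb"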

Deploy the modified file.

[root@vm0 edgefs]# kubectl apply -f cluster-clusterwide-directories.yaml
namespace/rook-edgefs created
serviceaccount/rook-edgefs-cluster created
role.rbac.authorization.k8s.io/rook-edgefs-cluster created
rolebinding.rbac.authorization.k8s.io/rook-edgefs-cluster-mgmt created
rolebinding.rbac.authorization.k8s.io/rook-edgefs-cluster created
podsecuritypolicy.policy/privileged created
clusterrole.rbac.authorization.k8s.io/privileged-psp-user created
clusterrolebinding.rbac.authorization.k8s.io/rook-edgefs-system-psp created
clusterrolebinding.rbac.authorization.k8s.io/rook-edgefs-cluster-psp created
cluster.edgefs.rook.io/rook-edgefs created

Check the state of the resources after deployment.

[root@vm0 edgefs]# kubectl get pods -n rook-edgefs -w


# Immediately after deployment

NAME                     READY   STATUS              RESTARTS   AGE
host-prepare-vm2-wqfws   0/1     ContainerCreating   0          2s
host-prepare-vm2-wqfws   0/1     Completed           0          3s
host-prepare-vm2-wqfws   0/1     Terminating         0          3s
host-prepare-vm2-wqfws   0/1     Terminating         0          3s


# Before deployment completes

NAME                     READY   STATUS              RESTARTS   AGE
rook-edgefs-mgr-6c8f8548bd-mxnnr   0/3     Pending             0          0s
rook-edgefs-target-0               0/3     Pending             0          0s
rook-edgefs-mgr-6c8f8548bd-mxnnr   0/3     Pending             0          0s
rook-edgefs-target-0               0/3     Pending             0          0s
rook-edgefs-target-1               0/3     Pending             0          0s
rook-edgefs-target-2               0/3     Pending             0          0s
rook-edgefs-target-1               0/3     Pending             0          1s
rook-edgefs-target-2               0/3     Pending             0          0s
rook-edgefs-target-0               0/3     ContainerCreating   0          1s
rook-edgefs-mgr-6c8f8548bd-mxnnr   0/3     ContainerCreating   0          1s
rook-edgefs-target-1               0/3     ContainerCreating   0          1s
rook-edgefs-target-2               0/3     ContainerCreating   0          0s


# After deployment completes

[root@vm0 edgefs]# kubectl get pods -n rook-edgefs -o wide
NAME                               READY   STATUS    RESTARTS   AGE    IP             NODE   NOMINATED NODE   READINESS GATES
rook-edgefs-mgr-6c8f8548bd-mxnnr   3/3     Running   0          2m5s   10.244.3.140   vm3    <none>           <none>
rook-edgefs-target-0               3/3     Running   0          2m5s   10.244.2.245   vm2    <none>           <none>
rook-edgefs-target-1               3/3     Running   0          2m5s   10.244.1.131   vm1    <none>           <none>
rook-edgefs-target-2               3/3     Running   0          2m4s   10.244.3.139   vm3    <none>           <none>

First, a preparation container (host-prepare-<host name>) is started for each node of the Kubernetes cluster, after which the rook-edgefs-mgr and rook-edgefs-target Pods are created.

rook-edgefs-mgr is a proxy that receives requests from the CSI plugin and handles tasks such as load balancing; it also acts as the Toolbox from which the efscli commands used later are run.

rook-edgefs-target is the data node of EdgeFS and manages the storage devices on each node.

Accessing the Dashboard

EdgeFS also provides a Dashboard, which is available as soon as the EdgeFS cluster has been created. A Service named rook-edgefs-ui is created together with the cluster.

[root@vm0 edgefs]# kubectl get service/rook-edgefs-ui -n rook-edgefs
NAME             TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)             AGE
rook-edgefs-ui   ClusterIP   10.96.207.131   <none>        3000/TCP,3443/TCP   4m1s

To access the Dashboard from outside the cluster, you need a Service that is reachable externally. Here I edited the existing rook-edgefs-ui Service and changed its type to NodePort.

[root@vm0 edgefs]# kubectl edit svc rook-edgefs-ui -n rook-edgefs

(snip)

spec:
  clusterIP: 10.96.207.131
  ports:
  - name: http-ui
    port: 3000
    protocol: TCP
    targetPort: 3000
  - name: https-ui
    port: 3443
    protocol: TCP
    targetPort: 3443
  selector:
    app: rook-edgefs-mgr
    rook_cluster: rook-edgefs
  sessionAffinity: None
  type: ClusterIP  # change this to NodePort

service/rook-edgefs-ui edited
[root@vm0 edgefs]# kubectl get service/rook-edgefs-ui -n rook-edgefs
NAME             TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)                         AGE
rook-edgefs-ui   NodePort   10.96.207.131   <none>        3000:31801/TCP,3443:30237/TCP   5m26s
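
Instead of editing the Service interactively, the same change can be applied with a one-line kubectl patch (equivalent to the edit above):

# Switch the rook-edgefs-ui Service to NodePort without opening an editor
kubectl -n rook-edgefs patch service rook-edgefs-ui -p '{"spec":{"type":"NodePort"}}'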

After the change, look up the global IP address of the node where the rook-edgefs-mgr Pod is running and open http://{node's global IP address}:31801 or https://{node's global IP address}:30237 in a browser.
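
The node and its IP address can be looked up with kubectl; the EXTERNAL-IP column of kubectl get nodes shows the address to use in this environment:

# Check which node rook-edgefs-mgr is running on and that node's external IP
kubectl get pods -n rook-edgefs -o wide | grep rook-edgefs-mgr
kubectl get nodes -o wide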

[Figure: EdgeFS Dashboard login screen]

When the login screen above appears, log in with the default admin account. The credentials are as follows.

  • Username: admin
  • Password: edgefs

[Figure: screen shown at login before the system has been initialized]

If the system initialization described later has not yet been performed, a screen like the one above is shown after logging in. Here, simply press the "Log In" button.

[Figure: EdgeFS Dashboard after login]

After logging in, a screen like the one above is displayed, and you can view information and change various settings.

Reference link:

Rook Doc - EdgeFS Dashboard and User Interface

Creating the NFS Service

Now that the EdgeFS cluster has been created, let's prepare to use its storage resources.

When a new namespace or local site is created, the FlexHash table and the Root Object must be initialized first.

FlexHash provides dynamic hashing that automatically selects the optimal targets for storing data, or the destinations for data access, based on the current load. It is responsible for I/O direction and plays an important role in dynamic load balancing.

The Root Object holds system information and the namespace table registered at the local site. This information is never exchanged between sites.

The EdgeFS cluster is configured by logging in to the rook-edgefs-mgr Pod and using the efscli command. EdgeFS organizes data in a Cluster Namespace / Tenant / Bucket hierarchy. A Bucket is associated with the storage service to be used (NFS here) and serves as the place where data is stored. In the following steps, these resources and services are created and linked together from the rook-edgefs-mgr Pod.

# Log in to the rook-edgefs-mgr Pod

[root@vm0 edgefs]# kubectl -n rook-edgefs exec -it rook-edgefs-mgr-6c8f8548bd-mxnnr -- env COLUMNS=$COLUMNS LINES=$LINES TERM=linux toolbox
Defaulting container name to rook-edgefs-mgr.
Use 'kubectl describe pod/rook-edgefs-mgr-6c8f8548bd-mxnnr -n rook-edgefs' to see all of the containers in this pod.

Welcome to EdgeFS Mgmt Toolbox.
Hint: type neadm or efscli to begin

root@rook-edgefs-mgr-6c8f8548bd-mxnnr:/opt/nedge# 

From here, initialize the system using the efscli system commands.

# Check the system status

root@rook-edgefs-mgr-6c8f8548bd-mxnnr:/opt/nedge# efscli system status

                SID                | HOST |          POD           | USED,% | STATE
+----------------------------------+------+------------------------+--------+--------+
  946500507EADD4356B8482618A071120 | vm1  | rook-edgefs-target-1-0 |  0.00  | ONLINE
  F87AE94509D78E49400FE17A72A49F3D | vm3  | rook-edgefs-target-2-0 |  0.00  | ONLINE
  6F16E10582BE5C63A37C4E3B70ACA2C7 | vm2  | rook-edgefs-target-0-0 |  0.00  | ONLINE


# Initialize the system

root@rook-edgefs-mgr-6c8f8548bd-mxnnr:/opt/nedge# efscli system init

System Initialization
=====================

Please review auto discovered FlexHash Table:

pid 14653
genid 1580657803967338
failure_domain 1
vdevcount 3
numrows 8
leader 0
servercount 3
zonecount 0
from_checkpoint 1

Please confirm initial configuration? [y/n]: y  # enter y
Sent message to daemon: FH_CPSET.1580657803967338
Successfully set FlexHash table to GenID=1580657803967338
System GUID: 02D672EA4AA74CA58D48F70668A6B7C2
root@rook-edgefs-mgr-6c8f8548bd-mxnnr:/opt/nedge#

Now that initialization is complete, create the resources to be used in the EdgeFS cluster. First, create a Cluster Namespace with the efscli cluster command.

# Create the Cluster Namespace "Hawaii"

root@rook-edgefs-mgr-6c8f8548bd-mxnnr:/opt/nedge# efscli cluster create Hawaii


# Confirm after creating the Cluster Namespace

root@rook-edgefs-mgr-6c8f8548bd-mxnnr:/opt/nedge# efscli cluster list
Hawaii
root@rook-edgefs-mgr-6c8f8548bd-mxnnr:/opt/nedge# efscli cluster show Hawaii
ccow-cluster-id: Hawaii
ccow-tenant-id:
ccow-bucket-id:
ccow-object-id:
ccow-parent-hash-id: B7B797752E0A17039A131D39054CD8E3C0D88687E757B1621413114CCF9DF282
ccow-cluster-hash-id: 567096F3289C2C4BEE476249F994B55C0A9D4DF3961544C8840A8B832E859132
ccow-name-hash-id: 567096F3289C2C4BEE476249F994B55C0A9D4DF3961544C8840A8B832E859132
ccow-tenant-hash-id: 0000000000000000000000000000000000000000000000000000000000000000
ccow-bucket-hash-id: 0000000000000000000000000000000000000000000000000000000000000000
ccow-object-hash-id: 0000000000000000000000000000000000000000000000000000000000000000
ccow-vm-content-hash-id: 7435A77E4D4215CC6E951AD8F4FF8BBFE22524FA1A39F22B2A7ECADA8E5FBA4A
ccow-uvid-src-guid: F87AE94509D78E4902D672EA4AA74CA5
ccow-logical-size: 0
ccow-prev-logical-size: 0
ccow-object-count: 0
ccow-uvid-src-cookie: 2388604273
ccow-uvid-timestamp: 1580658500954670
ccow-creation-time: 1580658500912482
ccow-tx-generation-id: 2
ccow-object-deleted: 0
ccow-chunkmap-type: btree_key_val
ccow-chunkmap-chunk-size: 1048576
ccow-chunkmap-btree-order: 192
ccow-chunkmap-btree-marker: 0
ccow-hash-type: 1
ccow-compress-type: 1
ccow-estimated-used: 0
ccow-replication-count: 3
ccow-sync-put: 0
ccow-select-policy: 4
ccow-failure-domain: 1
ccow-number-of-versions: 1
ccow-track-statistics: 1
ccow-iops-rate-lim: 0
ccow-ec-enabled: 0
ccow-ec-data-mode: 132610
ccow-ec-trigger-policy: 230400
ccow-file-object-transparency: 0
ccow-object-delete-after: 0
ccow-inline-data-flags: 0
root@rook-edgefs-mgr-6c8f8548bd-mxnnr:/opt/nedge#

Next, create a tenant inside the Cluster Namespace.

# Create the Cola tenant

root@rook-edgefs-mgr-6c8f8548bd-mxnnr:/opt/nedge# efscli tenant create Hawaii/Cola


# Confirm after creating the tenant

root@rook-edgefs-mgr-6c8f8548bd-mxnnr:/opt/nedge# efscli tenant list Hawaii
Cola
root@rook-edgefs-mgr-6c8f8548bd-mxnnr:/opt/nedge# efscli tenant show Hawaii/Cola
ccow-cluster-id: Hawaii
ccow-tenant-id: Cola
ccow-bucket-id:
ccow-object-id:
ccow-parent-hash-id: 567096F3289C2C4BEE476249F994B55C0A9D4DF3961544C8840A8B832E859132
ccow-cluster-hash-id: 567096F3289C2C4BEE476249F994B55C0A9D4DF3961544C8840A8B832E859132
ccow-name-hash-id: 7FD4E80FBEBEDB2D9BE902E6E96EC6E3AF6C67981C66833F71A0C431C97A508F
ccow-tenant-hash-id: 7FD4E80FBEBEDB2D9BE902E6E96EC6E3AF6C67981C66833F71A0C431C97A508F
ccow-bucket-hash-id: 0000000000000000000000000000000000000000000000000000000000000000
ccow-object-hash-id: 0000000000000000000000000000000000000000000000000000000000000000
ccow-vm-content-hash-id: 73A7D1635B705A15FCD1C4469403045B2BB92468EE109152FB4274107C37C668
ccow-uvid-src-guid: F87AE94509D78E4902D672EA4AA74CA5
ccow-logical-size: 0
ccow-prev-logical-size: 0
ccow-object-count: 0
ccow-uvid-src-cookie: 3378460092
ccow-uvid-timestamp: 1580658579117518
ccow-creation-time: 1580658578813248
ccow-tx-generation-id: 2
ccow-object-deleted: 0
ccow-chunkmap-type: btree_key_val
ccow-chunkmap-chunk-size: 1048576
ccow-chunkmap-btree-order: 48
ccow-chunkmap-btree-marker: 0
ccow-hash-type: 1
ccow-compress-type: 1
ccow-estimated-used: 0
ccow-replication-count: 3
ccow-sync-put: 0
ccow-select-policy: 4
ccow-failure-domain: 1
ccow-number-of-versions: 1
ccow-track-statistics: 1
ccow-iops-rate-lim: 0
ccow-ec-enabled: 0
ccow-ec-data-mode: 132610
ccow-ec-trigger-policy: 230400
ccow-file-object-transparency: 0
ccow-object-delete-after: 0
ccow-inline-data-flags: 0
root@rook-edgefs-mgr-6c8f8548bd-mxnnr:/opt/nedge#

Next, create a bucket.

# Create the bk1 bucket

root@rook-edgefs-mgr-6c8f8548bd-mxnnr:/opt/nedge# efscli bucket create Hawaii/Cola/bk1


# Confirm after creating the bucket

root@rook-edgefs-mgr-6c8f8548bd-mxnnr:/opt/nedge# efscli bucket list Hawaii/Cola
bk1
root@rook-edgefs-mgr-6c8f8548bd-mxnnr:/opt/nedge# efscli bucket show Hawaii/Cola/bk1
ccow-cluster-id: Hawaii
ccow-tenant-id: Cola
ccow-bucket-id: bk1
ccow-object-id:
ccow-parent-hash-id: 7FD4E80FBEBEDB2D9BE902E6E96EC6E3AF6C67981C66833F71A0C431C97A508F
ccow-cluster-hash-id: 567096F3289C2C4BEE476249F994B55C0A9D4DF3961544C8840A8B832E859132
ccow-name-hash-id: DEFB413E609B7AF8F36064D38C70DD4B8CF3943E0221B14F3A7490E790184CC1
ccow-tenant-hash-id: 7FD4E80FBEBEDB2D9BE902E6E96EC6E3AF6C67981C66833F71A0C431C97A508F
ccow-bucket-hash-id: DEFB413E609B7AF8F36064D38C70DD4B8CF3943E0221B14F3A7490E790184CC1
ccow-object-hash-id: 0000000000000000000000000000000000000000000000000000000000000000
ccow-vm-content-hash-id: B516CA7C66ED52A9297B3823F3C211658CA1CE768C419C6D5C4176391C4A85D7
ccow-uvid-src-guid: F87AE94509D78E4902D672EA4AA74CA5
ccow-logical-size: 0
ccow-prev-logical-size: 0
ccow-object-count: 0
ccow-uvid-src-cookie: 1677670041
ccow-uvid-timestamp: 1580658656829809
ccow-creation-time: 1580658656829809
ccow-tx-generation-id: 1
ccow-object-deleted: 0
ccow-chunkmap-type: btree_key_val
ccow-chunkmap-chunk-size: 1048576
ccow-chunkmap-btree-order: 48
ccow-chunkmap-btree-marker: 0
ccow-hash-type: 1
ccow-compress-type: 1
ccow-estimated-used: 0
ccow-replication-count: 3
ccow-sync-put: 0
ccow-select-policy: 4
ccow-failure-domain: 1
ccow-number-of-versions: 1
ccow-track-statistics: 1
ccow-iops-rate-lim: 0
ccow-ec-enabled: 0
ccow-ec-data-mode: 132610
ccow-ec-trigger-policy: 230400
ccow-file-object-transparency: 0
ccow-object-delete-after: 0
ccow-inline-data-flags: 32
root@rook-edgefs-mgr-6c8f8548bd-mxnnr:/opt/nedge#

Now that the destination bucket has been created, start the NFS service.

# Create the nfs-cola NFS service

root@rook-edgefs-mgr-6c8f8548bd-mxnnr:/opt/nedge# efscli service create nfs nfs-cola


# Confirm after creating the service

root@rook-edgefs-mgr-6c8f8548bd-mxnnr:/opt/nedge# efscli service list
nfs-cola
root@rook-edgefs-mgr-6c8f8548bd-mxnnr:/opt/nedge# efscli service show nfs-cola
X-Service-Name: nfs-cola
X-Service-Type: nfs
X-Description: NFS Server
X-Servers: -
X-Status: disabled
X-Auth-Type: disabled
X-MH-ImmDir: 1
[
]
root@rook-edgefs-mgr-6c8f8548bd-mxnnr:/opt/nedge#

Now that the service exists, link it to the Hawaii/Cola/bk1 bucket prepared earlier.

# Link the bucket to the service

root@rook-edgefs-mgr-6c8f8548bd-mxnnr:/opt/nedge# efscli service serve nfs-cola Hawaii/Cola/bk1


# Confirm

Serving new export 2,Cola/bk1@Hawaii/Cola/bk1
root@rook-edgefs-mgr-6c8f8548bd-mxnnr:/opt/nedge# efscli service show nfs-cola
X-Service-Name: nfs-cola
X-Service-Type: nfs
X-Description: NFS Server
X-Servers: -
X-Status: disabled
X-Auth-Type: disabled
X-MH-ImmDir: 1
[
  2,Cola/bk1@Hawaii/Cola/bk1
]
root@rook-edgefs-mgr-6c8f8548bd-mxnnr:/opt/nedge#

With that, the preparation for creating the NFS resource is complete.

Next, create the NFS resource. Since the nfs-cola service is used here, metadata.name must be set to nfs-cola. The YAML file used is shown below.

apiVersion: edgefs.rook.io/v1
kind: NFS
metadata:
  name: nfs-cola
  namespace: rook-edgefs
spec:
  instances: 1

Deploy the file.

# State before creating the NFS resource

[root@vm0 edgefs]# kubectl get all -n rook-edgefs
NAME                                   READY   STATUS    RESTARTS   AGE
pod/rook-edgefs-mgr-6c8f8548bd-mxnnr   3/3     Running   0          25m
pod/rook-edgefs-target-0               3/3     Running   0          25m
pod/rook-edgefs-target-1               3/3     Running   0          25m
pod/rook-edgefs-target-2               3/3     Running   0          25m

NAME                          TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
service/rook-edgefs-mgr       ClusterIP   10.96.134.31    <none>        6789/TCP                     25m
service/rook-edgefs-restapi   ClusterIP   10.96.164.43    <none>        8881/TCP,8080/TCP,4443/TCP   25m
service/rook-edgefs-target    ClusterIP   None            <none>        <none>                       25m
service/rook-edgefs-ui        ClusterIP   10.96.248.168   <none>        3000/TCP,3443/TCP            25m

NAME                              READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/rook-edgefs-mgr   1/1     1            1           25m

NAME                                         DESIRED   CURRENT   READY   AGE
replicaset.apps/rook-edgefs-mgr-6c8f8548bd   1         1         1       25m

NAME                                  READY   AGE
statefulset.apps/rook-edgefs-target   3/3     25m


# Deploy the NFS resource

[root@vm0 edgefs]# kubectl apply -f nfs-cola.yaml
nfs.edgefs.rook.io/nfs-cola created


# State after deployment

[root@vm0 edgefs]# kubectl get all -n rook-edgefs
NAME                                            READY   STATUS    RESTARTS   AGE
pod/rook-edgefs-mgr-6c8f8548bd-mxnnr            3/3     Running   0          26m
pod/rook-edgefs-nfs-nfs-cola-684fc8544b-gtrn6   1/1     Running   0          32s★
pod/rook-edgefs-target-0                        3/3     Running   0          26m
pod/rook-edgefs-target-1                        3/3     Running   0          26m
pod/rook-edgefs-target-2                        3/3     Running   0          26m

NAME                               TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                                                                                                               AGE
service/rook-edgefs-mgr            ClusterIP   10.96.134.31    <none>        6789/TCP                                                                                                              26m
service/rook-edgefs-nfs-nfs-cola   ClusterIP   10.96.219.197   <none>        49000/TCP,2049/TCP,2049/UDP,32803/TCP,32803/UDP,20048/TCP,20048/UDP,111/TCP,111/UDP,662/TCP,662/UDP,875/TCP,875/UDP   32s★
service/rook-edgefs-restapi        ClusterIP   10.96.164.43    <none>        8881/TCP,8080/TCP,4443/TCP                                                                                            26m
service/rook-edgefs-target         ClusterIP   None            <none>        <none>                                                                                                                26m
service/rook-edgefs-ui             ClusterIP   10.96.248.168   <none>        3000/TCP,3443/TCP                                                                                                     26m

NAME                                       READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/rook-edgefs-mgr            1/1     1            1           26m
deployment.apps/rook-edgefs-nfs-nfs-cola   1/1     1            1           32s★

NAME                                                  DESIRED   CURRENT   READY   AGE
replicaset.apps/rook-edgefs-mgr-6c8f8548bd            1         1         1       26m
replicaset.apps/rook-edgefs-nfs-nfs-cola-684fc8544b   1         1         1       32s★

NAME                                  READY   AGE
statefulset.apps/rook-edgefs-target   3/3     26m
[root@vm0 edgefs]#

The entries marked with ★ are the newly created resources.
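
The NFS custom resource itself can also be checked with kubectl; the Instances column comes from the additionalPrinterColumns defined in the CRD (the plural name nfss is defined in operator.yaml above):

# List the EdgeFS NFS custom resources
kubectl get nfss.edgefs.rook.io -n rook-edgefs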

Using the Storage

Once the NFS resource has been created, the storage can be used over the NFS protocol. Running showmount against the IP address of the Service created along with the NFS resource confirms that the export is available.

[root@vm0 edgefs]# showmount -e 10.96.219.197
Export list for 10.96.219.197:
/Cola/bk1 (everyone)
[root@vm0 edgefs]#

Let's mount it on the Kubernetes master node.

[root@vm0 edgefs]# mkdir /Hawaii
[root@vm0 edgefs]# mount -t nfs 10.96.219.197:/Cola/bk1 /Hawaii
[root@vm0 edgefs]# df -h
Filesystem               Size  Used Avail Use% Mounted on
devtmpfs                 3.9G     0  3.9G   0% /dev
tmpfs                    3.9G     0  3.9G   0% /dev/shm
tmpfs                    3.9G   11M  3.9G   1% /run
tmpfs                    3.9G     0  3.9G   0% /sys/fs/cgroup
/dev/sda2                 30G  4.0G   26G  14% /
/dev/sdc1                 99G   61M   94G   1% /data
/dev/sda1                497M   62M  436M  13% /boot
/dev/sdb1                 16G   45M   15G   1% /mnt/resource

(snip)

tmpfs                    797M     0  797M   0% /run/user/1000
10.96.219.197:/Cola/bk1  512T     0  512T   0% /Hawaii★
[root@vm0 edgefs]#
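
As a quick write test, create a file on the mount (the file name and size here are arbitrary examples):

# Write 100 MB of test data onto the NFS mount
dd if=/dev/zero of=/Hawaii/testfile bs=1M count=100
ls -lh /Hawaii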

After placing files in this directory, the usage shows up in efscli as follows.

root@rook-edgefs-mgr-6c8f8548bd-j5m7h:/opt/nedge# efscli system status

                SID                | HOST |          POD           | USED,% | STATE
+----------------------------------+------+------------------------+--------+--------+
  6F16E10582BE5C63A37C4E3B70ACA2C7 | vm2  | rook-edgefs-target-1-0 |  0.03  | ONLINE
  F87AE94509D78E49400FE17A72A49F3D | vm3  | rook-edgefs-target-0-0 |  0.03  | ONLINE
  946500507EADD4356B8482618A071120 | vm1  | rook-edgefs-target-2-0 |  0.01  | ONLINE

root@rook-edgefs-mgr-6c8f8548bd-j5m7h:/opt/nedge#

To use the storage from an application Pod, create StorageClass and PersistentVolume resources and bind them to the Pod via a PersistentVolumeClaim.

# StorageClass

[root@vm0 edgefs]# kubectl apply -f storage-class.yaml
storageclass.storage.k8s.io/local-storage created


# PersistentVolume

[root@vm0 edgefs]# kubectl apply -f persistent-volume.yaml
persistentvolume/edgefs-data-0 created
persistentvolume/edgefs-data-1 created
persistentvolume/edgefs-data-2 created


# PersistentVolumeClaim

[root@vm0 edgefs]# kubectl apply -f pvc.yaml
persistentvolumeclaim/edgefs-pvc created


# Pod
[root@vm0 edgefs]# kubectl apply -f pod.yaml
pod/edgefs-demo-pod created


# Confirm after deployment
[root@vm0 edgefs]# kubectl get pvc
NAME         STATUS   VOLUME          CAPACITY   ACCESS MODES   STORAGECLASS    AGE
edgefs-pvc   Bound    edgefs-data-1   100Gi      RWO            local-storage   91s

[root@vm0 edgefs]# kubectl get pods
NAME              READY   STATUS    RESTARTS   AGE
edgefs-demo-pod   1/1     Running   0          20s

The YAML files used above are as follows.

Contents of storage-class.yaml

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer

Contents of persistent-volume.yaml

---
kind: PersistentVolume
apiVersion: v1
metadata:
  name: edgefs-data-0
  namespace: rook-edgefs
  labels:
    type: local
spec:
  storageClassName: local-storage
  persistentVolumeReclaimPolicy: Retain
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/edgefs"
---
kind: PersistentVolume
apiVersion: v1
metadata:
  name: edgefs-data-1
  namespace: rook-edgefs
  labels:
    type: local
spec:
  storageClassName: local-storage
  persistentVolumeReclaimPolicy: Retain
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/edgefs"
---
kind: PersistentVolume
apiVersion: v1
metadata:
  name: edgefs-data-2
  namespace: rook-edgefs
  labels:
    type: local
spec:
  storageClassName: local-storage
  persistentVolumeReclaimPolicy: Retain
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/edgefs"

Contents of pvc.yaml

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: edgefs-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: local-storage

Contents of pod.yaml

---
apiVersion: v1
kind: Pod
metadata:
  name: edgefs-demo-pod
spec:
  containers:
   - name: web-server
     image: nginx
     volumeMounts:
       - name: mypvc
         mountPath: /var/lib/www/html
  volumes:
   - name: mypvc
     persistentVolumeClaim:
       claimName: edgefs-pvc
       readOnly: false
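
Note that the PersistentVolumes above are plain hostPath volumes; the EdgeFS NFS export itself is not consumed through them. As an alternative sketch, a Pod could mount the export directly through an NFS-type PersistentVolume using the Service IP and export path shown earlier (hard-coding a ClusterIP in a manifest is fragile and is shown here only for illustration; the resource name is hypothetical):

---
kind: PersistentVolume
apiVersion: v1
metadata:
  name: edgefs-nfs-pv            # hypothetical name
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 10.96.219.197        # ClusterIP of service/rook-edgefs-nfs-nfs-cola
    path: /Cola/bk1              # export shown by showmount above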

Reference link:

Rook Doc - EdgeFS Scale-Out NFS CRD

Deleting the Cluster

Once the work is finished, delete the cluster. In addition to deleting the deployed resources, this involves cleaning up the directories and devices that were used for EdgeFS data.

# Delete the resources
[root@vm0 edgefs]# kubectl delete -f pod.yaml
[root@vm0 edgefs]# kubectl delete -f pvc.yaml
[root@vm0 edgefs]# kubectl delete -f persistent-volume.yaml
[root@vm0 edgefs]# kubectl delete -f storage-class.yaml
[root@vm0 edgefs]# kubectl delete -f cluster-clusterwide-directories.yaml

[root@vm0 edgefs]# kubectl get all -n rook-edgefs
No resources found in rook-edgefs namespace.

[root@vm0 edgefs]# kubectl delete -f operator.yaml

[root@vm0 edgefs]# kubectl get all -n rook-edgefs-system
No resources found in rook-edgefs-system namespace.

# Delete the directories on the nodes

[root@vm1 ~]# ll /data
total 4
drwx------ 10 root root 4096 Feb  2 15:36 parts
[root@vm1 ~]# rm -rf /data/*
[root@vm1 ~]# ll /data
total 0

[root@vm2 ~]# ll /data
total 16
drwxr-xr-x  3 root root 4096 Feb  2 03:00 mon-a
drwxr--r--  4  167  167 4096 Feb  2 03:00 osd0
drwx------ 10 root root 4096 Feb  2 15:36 parts
drwxr-xr-x  4 root root 4096 Feb  2 03:00 rook-ceph
[root@vm2 ~]# rm -rf /data/*
[root@vm2 ~]# ll /data
total 0

[root@vm3 ~]# ll /data
total 12
drwxr--r--  4  167  167 4096 Feb  2 03:00 osd2
drwx------ 10 root root 4096 Feb  2 15:36 parts
drwxr-xr-x  4 root root 4096 Feb  2 03:00 rook-ceph
[root@vm3 ~]# rm -rf /data/*
[root@vm3 ~]# ll /data
total 0
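
If raw devices had been specified instead of directories, they would also need to be wiped before reuse; a hedged sketch (the device name is an example):

# Remove any filesystem/partition signatures EdgeFS left on a raw device
wipefs --all /dev/sdb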

Reference Documents

EdgeFS Data Fabric Quickstart

Rook Doc - EdgeFS Cluster CRD

makotow’s blog - Rook: EdgeFS NFSサービスをデプロイする (Deploying an EdgeFS NFS service)