Sharing Storage Across Pod Replicas

Prerequisites

Refer to the Getting Started documentation for installation and setup information:

  • microk8s

  • UCS tools

    • Accessing NGC

    • Setting up repositories

Introduction

In this tutorial, we show how to share storage across the pod replicas of a microservice.

This tutorial is a continuation of the Adding Storage tutorial.

Updating the Microservices and the Application

No special changes to the manifests are required for storage sharing itself. However, we will make a few changes to the manifests to demonstrate that every pod replica can access the same storage.

Update the microservice manifest of the HTTP server microservice http-server as follows:

type: msapplication
specVersion: 2.5.0
name: ucf.svc.http-server
chartName: http-server
description: http server
version: 0.0.3
tags: []
keywords: []
publish: false
ingress-endpoints:
  - name: http
    description: REST API endpoint
    protocol: TCP
    scheme: http
    mandatory: False
    data-flow: in-out
---
spec:
  - name: http-server-deployment
    type: ucf.k8s.app.deployment
    parameters:
      apptype: stateless

  - name: http-server-container
    type: ucf.k8s.container
    parameters:
      image:
        repository: nvcr.io/nvidia/pytorch
        tag: 22.04-py3
      command: [sh, -c]
      args: [
            "cd /localvol && echo $PWD && touch $POD_NAME.txt && ls && python -m http.server 8080
            "]
      env:
      - name: POD_NAME
        valueFrom:
          fieldRef:
            fieldPath: metadata.name
      ports:
        - containerPort: 8080
          name: http
      volumeMounts:
        - name: localvol
          mountPath: /localvol

  - name: svc
    type: ucf.k8s.service
    parameters:
      ports:
      - port: 8080
        protocol: TCP
        name: http

  - name: localvol
    type: ucf.k8s.volume
    parameters:
      persistentVolumeClaim:
        claimName: local-path-pvc

There are two changes in the manifest file:

  • The current pod name is exposed inside the container as the environment variable POD_NAME, using the Kubernetes Downward API (fieldRef: metadata.name)

  • The container args are updated to create an empty file <pod-name>.txt; since every replica of a Deployment gets a unique generated name, each replica creates a distinct file
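Outside Kubernetes, the effect of these two changes can be sketched in a few lines of Python: every replica that writes into the same shared directory leaves behind its own uniquely named marker file. The pod names below are hypothetical stand-ins for the generated names.

```python
import pathlib
import tempfile

# Simulate a shared volume mounted by several replicas.
shared_volume = pathlib.Path(tempfile.mkdtemp())

# Each "replica" creates a marker file named after itself,
# mirroring `touch $POD_NAME.txt` in the container args.
pod_names = ["http-server-replica-a", "http-server-replica-b"]
for pod_name in pod_names:
    (shared_volume / f"{pod_name}.txt").touch()

# Every marker file is visible to any replica reading the volume.
print(sorted(p.name for p in shared_volume.iterdir()))
# → ['http-server-replica-a.txt', 'http-server-replica-b.txt']
```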

Update the microservice manifest of the Curl client microservice curl-client as follows:

type: msapplication
specVersion: 2.5.0
name: ucf.svc.curl-client
chartName: curl-client
description: Curl Client
version: 0.0.2
tags: []
keywords: []
publish: false

egress-endpoints:
  - name: http
    description: REST API endpoint
    protocol: TCP
    scheme: http
    mandatory: False
    data-flow: in-out

---
spec:
  - name: curl-client-deployment
    type: ucf.k8s.app.deployment
    parameters:
      apptype: stateless

  - name: curl-client-container
    type: ucf.k8s.container
    parameters:
      image:
        repository: nvcr.io/nvidia/pytorch
        tag: 22.04-py3
      command: [sh, -c]
      args: [
            "while true; do sleep 10 && curl $egress.http.address:$egress.http.port; done
            "]

There is one change in the manifest file:

  • The container args are updated to call the HTTP API periodically, every 10 seconds
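For intuition, once the connection in app.yaml is in place and the $egress.http.address and $egress.http.port placeholders are resolved, the args behave as if they had been written against the server's service directly. This is a sketch using the service name and port visible in the kubectl output in this tutorial:

```yaml
args: [
      "while true; do sleep 10 && curl http-server-http-server-deployment-svc:8080; done
      "]
```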

In the app.yaml file, make the following updates:

specVersion: 2.5.0
version: 0.0.3
doc: README.md
name: server-client-app
description: Server Client Application

dependencies:
- ucf.svc.curl-client:0.0.2
- ucf.svc.http-server:0.0.3

components:
- name: client
  type: ucf.svc.curl-client
- name: http-server
  type: ucf.svc.http-server

connections:
    client/http: http-server/http

The versions of http-server, curl-client, and the application have been updated.

Building and Deploying the Microservices and the Application

Follow the same steps as in the previous tutorial, but remember to update the versions of http-server and curl-client under the dependencies section in app.yaml.

Inspecting and Debugging the Microservices and the Application

Let's verify that the application was deployed successfully:

$ microk8s kubectl get all
NAME                                                      READY   STATUS    RESTARTS   AGE
pod/curl-client-curl-client-deployment-77d5f5465d-jfw47   1/1     Running   0          11s
pod/http-server-http-server-deployment-6bb7754dfc-kw2dh   1/1     Running   0          11s

NAME                                             TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
service/kubernetes                               ClusterIP   10.152.183.1    <none>        443/TCP    49d
service/http-server-http-server-deployment-svc   ClusterIP   10.152.183.80   <none>        8080/TCP   11s

NAME                                                 READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/curl-client-curl-client-deployment   1/1     1            1           11s
deployment.apps/http-server-http-server-deployment   1/1     1            1           11s

NAME                                                            DESIRED   CURRENT   READY   AGE
replicaset.apps/curl-client-curl-client-deployment-77d5f5465d   1         1         1       11s
replicaset.apps/http-server-http-server-deployment-6bb7754dfc   1         1         1       11s

We can check the logs of the http-server container to see whether a file named <pod-name>.txt was created in the current working directory (the /localvol folder in this case):

$ microk8s kubectl logs --tail=-1 -l "app=http-server-http-server-deployment"
/localvol
http-server-http-server-deployment-6bb7754dfc-kw2dh.txt
somefile.txt

We can also fetch the logs of the curl client to verify that the file was created in the mounted volume:

$ microk8s kubectl logs --tail=-1 -l "app=curl-client-curl-client-deployment"
...
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   481  100   481    0     0   156k      0 --:--:-- --:--:-- --:--:--  156k
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd">
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
<title>Directory listing for /</title>
</head>
<body>
<h1>Directory listing for /</h1>
<hr>
<ul>
<li><a href="http-server-http-server-deployment-6bb7754dfc-kw2dh.txt">http-server-http-server-deployment-6bb7754dfc-kw2dh.txt</a></li>
<li><a href="somefile.txt">somefile.txt</a></li>
</ul>
<hr>
</body>
</html>

As we can see, http-server-http-server-deployment-6bb7754dfc-kw2dh.txt shows up in the client's directory listing, which means the file was indeed created in the mounted path.
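The server side of this check relies only on standard `python -m http.server` behavior: the auto-generated directory listing includes any file created in the served directory. This can be reproduced locally with a minimal sketch (the marker file name here is hypothetical):

```python
import http.server
import os
import socketserver
import tempfile
import threading
import urllib.request

# Create a directory with a marker file, mimicking `touch $POD_NAME.txt`.
workdir = tempfile.mkdtemp()
open(os.path.join(workdir, "my-pod.txt"), "w").close()

# Serve the directory on an ephemeral port, as `python -m http.server` does.
handler = lambda *args, **kwargs: http.server.SimpleHTTPRequestHandler(
    *args, directory=workdir, **kwargs)
server = socketserver.TCPServer(("127.0.0.1", 0), handler)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# Fetch the auto-generated directory listing, as the curl client does.
listing = urllib.request.urlopen(f"http://127.0.0.1:{port}/").read().decode()
server.shutdown()

print("my-pod.txt" in listing)  # → True
```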

Now let's manually scale the pod replicas up to 3:

$ microk8s kubectl scale deploy/http-server-http-server-deployment --replicas=3

Let's confirm that there are 2 new pod replicas:

$ microk8s kubectl get all
NAME                                                      READY   STATUS    RESTARTS   AGE
pod/curl-client-curl-client-deployment-77d5f5465d-jfw47   1/1     Running   0          11m
pod/http-server-http-server-deployment-6bb7754dfc-kw2dh   1/1     Running   0          11m
pod/http-server-http-server-deployment-6bb7754dfc-q68wq   1/1     Running   0          19s
pod/http-server-http-server-deployment-6bb7754dfc-xdb6p   1/1     Running   0          19s

NAME                                             TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
service/kubernetes                               ClusterIP   10.152.183.1    <none>        443/TCP    49d
service/http-server-http-server-deployment-svc   ClusterIP   10.152.183.80   <none>        8080/TCP   11m

NAME                                                 READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/curl-client-curl-client-deployment   1/1     1            1           11m
deployment.apps/http-server-http-server-deployment   3/3     3            3           11m

NAME                                                            DESIRED   CURRENT   READY   AGE
replicaset.apps/curl-client-curl-client-deployment-77d5f5465d   1         1         1       11m
replicaset.apps/http-server-http-server-deployment-6bb7754dfc   3         3         3       11m

Let's check the logs of the curl-client pod. We must wait for all http-server pods to reach the Running state, plus an additional 10 seconds for the curl client pod to execute the curl command.

$ microk8s kubectl logs --tail=-1 -l "app=curl-client-curl-client-deployment"
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   751  100   751    0     0   183k      0 --:--:-- --:--:-- --:--:--  183k
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd">
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
<title>Directory listing for /</title>
</head>
<body>
<h1>Directory listing for /</h1>
<hr>
<ul>
<li><a href="http-server-http-server-deployment-6bb7754dfc-kw2dh.txt">http-server-http-server-deployment-6bb7754dfc-kw2dh.txt</a></li>
<li><a href="http-server-http-server-deployment-6bb7754dfc-q68wq.txt">http-server-http-server-deployment-6bb7754dfc-q68wq.txt</a></li>
<li><a href="http-server-http-server-deployment-6bb7754dfc-xdb6p.txt">http-server-http-server-deployment-6bb7754dfc-xdb6p.txt</a></li>
<li><a href="somefile.txt">somefile.txt</a></li>
</ul>
<hr>
</body>
</html>

As we can see, each http-server pod replica has a corresponding file. This confirms that the same storage (PVC) is mounted in all the pod replicas.

Stopping and Cleaning Up the Microservices and the Application

Finally, to stop and clean up the application, we can run:

$ microk8s helm3 uninstall server-client

A few points to note:

  • Microservices must themselves handle read/write contention for shared storage; Kubernetes does not provide any built-in mechanism for this.

  • Persistent volumes created by the Local Path Provisioner can be shared only by pods running on the same node:

    • Pod replicas that mount a PVC with the mdx-local-path storageClass are never scheduled on different nodes, because of the nodeAffinity of the backing Persistent Volume

    • If the node where the PV was created does not have enough resources to start more pods, the additional pod replicas may fail to be scheduled at all

  • To share storage across pod replicas scheduled on different nodes, use a different PV provisioner that allows PVs to be mounted across nodes.
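On the first point, one common approach is POSIX advisory file locks, which serialize writers to a file on the shared volume. A minimal sketch follows; note that advisory-lock semantics can differ on network filesystems, so this assumes a local-path volume as used in this tutorial:

```python
import fcntl
import os
import tempfile

# A file on the shared volume that several replicas append to.
shared_file = os.path.join(tempfile.mkdtemp(), "shared.log")

def append_exclusively(line: str) -> None:
    """Append a line while holding an exclusive advisory lock."""
    with open(shared_file, "a") as f:
        fcntl.flock(f, fcntl.LOCK_EX)  # blocks until the lock is acquired
        try:
            f.write(line + "\n")
        finally:
            fcntl.flock(f, fcntl.LOCK_UN)

# Two "replicas" appending to the same file without interleaving writes.
append_exclusively("written by replica A")
append_exclusively("written by replica B")
with open(shared_file) as f:
    print(f.read().splitlines())
# → ['written by replica A', 'written by replica B']
```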