Object Storage Service (OSS): Enable the Connector in Kubernetes

Last updated: Mar 31, 2026

This topic describes how to install and use the OSS Connector for AI/ML component in a Kubernetes deployment to run an inference service that loads model data from OSS.

Install the Connector component

To use the Connector in a Kubernetes deployment, the Connector component must be installed in the workload container. Choose any of the following installation methods based on your scenario.

Method 1: Install via an init container

An init container downloads the Connector package and extracts it into a shared directory before the main container starts.

initContainers:
  # Init container: runs before the main container to stage dependency files
  - name: install-connector
    image: busybox
    command:
      - sh
      - -c
      - |
        # Download the Connector .deb package
        wget -q https://gosspublic.alicdn.com/oss-connector/oss-connector-lib-1.2.0.x86_64.deb -O /tmp/connector.deb
        # Create a temp directory and unpack the .deb
        mkdir -p /tmp/extract && cd /tmp/extract
        ar x /tmp/connector.deb
        # Extract only the required .so file into the shared directory
        # /shared is a volume mounted via volumeMounts and visible to the main container
        mkdir -p /shared/usr/local/lib
        tar -xf data.tar.gz -O ./usr/local/lib/libossc_preload.so > /shared/usr/local/lib/libossc_preload.so
    volumeMounts:
      # Mount the connector-lib volume at /shared
      # The main container mounts the same volume, so files are shared
      - name: connector-lib
        mountPath: /shared

containers:
  - name: vllm
    image: vllm/vllm-openai:latest
    volumeMounts:
      # Mount the connector-lib volume at /usr/local/lib in the container
      # The .so file extracted by the init container is then available under /usr/local/lib
      - name: connector-lib
        mountPath: /usr/local/lib
        subPath: usr/local/lib

volumes:
  # Shared volume for passing files between the init container and the main container
  - name: connector-lib
    emptyDir: {}
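
The extraction pipeline in the init container can be sanity-checked outside the cluster. The sketch below is hypothetical: it skips the `ar x` step (which needs a real .deb) and uses a dummy payload and local directories to mirror the selective `tar ... -O` extraction into a shared directory:

```shell
# Simulate the init container's selective extraction with a dummy payload
mkdir -p demo/usr/local/lib shared/usr/local/lib
echo "dummy-so" > demo/usr/local/lib/libossc_preload.so

# Pack the file with the ./usr/... member path that data.tar.gz inside a .deb uses
tar -czf data.tar.gz -C demo ./usr/local/lib/libossc_preload.so

# Extract only the needed member to stdout and write it into the shared directory
tar -xzf data.tar.gz -O ./usr/local/lib/libossc_preload.so \
  > shared/usr/local/lib/libossc_preload.so
cat shared/usr/local/lib/libossc_preload.so
```

The `-O` flag writes the selected member to stdout instead of the filesystem, which is what lets the init container copy a single file out of the archive without unpacking everything.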

Method 2: Install dynamically at startup

Download and install the Connector component in the container's start command.

containers:
  - name: vllm
    image: vllm/vllm-openai:latest
    command: ["/bin/bash", "-c"]
    args:
      - |
        # Download and install the Connector .deb package
        wget https://gosspublic.alicdn.com/oss-connector/oss-connector-lib-1.2.0.x86_64.deb
        dpkg -i oss-connector-lib-1.2.0.x86_64.deb
        # Start the model service
        ENABLE_CONNECTOR=1 python3 -m vllm.entrypoints.openai.api_server --model ${MODEL_DIR} ...
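
The `ENABLE_CONNECTOR=1` prefix on the start command is a one-command environment assignment: the variable is set only for the launched process, not for the rest of the shell session. A minimal local illustration:

```shell
# The VAR=value prefix applies only to the command it precedes
ENABLE_CONNECTOR=1 sh -c 'echo "child sees: $ENABLE_CONNECTOR"'

# The surrounding shell never had the variable set
echo "parent sees: ${ENABLE_CONNECTOR:-unset}"
```

This is why the manifests below do not need a separate `export` for `ENABLE_CONNECTOR` before launching the server.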

Method 3: Custom Dockerfile

Using the official vLLM image vllm/vllm-openai as an example, build a custom image with the Connector component preinstalled.

FROM vllm/vllm-openai:latest
RUN wget https://gosspublic.alicdn.com/oss-connector/oss-connector-lib-1.2.0.x86_64.deb && \
    dpkg -i oss-connector-lib-1.2.0.x86_64.deb

Build and push the image:

docker build -t myregistry/vllm-with-connector:latest .
docker push myregistry/vllm-with-connector:latest

Start with the image that has the Connector component preinstalled:

containers:
  - name: vllm
    image: myregistry/vllm-with-connector:latest

Comparison of installation methods

| Installation method | Use cases | Pros | Cons |
| --- | --- | --- | --- |
| Init container install / dynamic install at startup | Ad-hoc tests, quick validation, development and debugging, or keeping the base image unmodified | No custom image required; flexible deployment; simple configuration; installed on demand | Download and extraction on every start; longer startup time; depends on external network access |
| Custom Dockerfile | Production environments, long-running services, large-scale clusters | Fastest startup; self-contained image; high stability; reusable | Custom image must be maintained; version is pinned into the image |

Deploy a model inference service

Using the official vLLM image vllm/vllm-openai as an example, install the Connector component via an init container and deploy an inference service that loads model data from OSS.

# ConfigMap: stores the Connector configuration file
# Purpose: mount the Connector configuration into the container as a file for it to read
# Mount path: /etc/oss-connector/config.json
apiVersion: v1
kind: ConfigMap
metadata:
  name: connector-config
data:
  config.json: |
    {
        "logLevel": 1,
        "logPath": "/var/log/oss-connector/connector.log",
        "auditPath": "/var/log/oss-connector/audit.log",
        "expireTimeSec": 120,
        "prefetch": {
            "vcpus": 16,
            "workers": 16
        }
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: model-connector-deployment
spec:
  selector:
    matchLabels:
      app: model-connector
  template:
    metadata:
      labels:
        app: model-connector
    spec:
      # Init container: runs before the main container to stage dependency files
      initContainers:
        - name: install-connector
          image: busybox
          command: ["/bin/sh", "-c"]
          args:
            - |
              # Download the Connector .deb package
              wget -q https://gosspublic.alicdn.com/oss-connector/oss-connector-lib-1.2.0.x86_64.deb -O /tmp/connector.deb

              # Create a temp directory and unpack the .deb
              mkdir -p /tmp/extract && cd /tmp/extract
              ar x /tmp/connector.deb

              # Extract only the required .so file into the shared directory
              # /shared is a volume mounted via volumeMounts and visible to the main container
              mkdir -p /shared/usr/local/lib
              tar -xf data.tar.gz -O ./usr/local/lib/libossc_preload.so > /shared/usr/local/lib/libossc_preload.so

          volumeMounts:
            # Mount the connector-lib volume at /shared
            # The main container mounts the same volume, so files are shared
            - name: connector-lib
              mountPath: /shared
      containers:
      - name: vllm
        image: vllm/vllm-openai:latest
        imagePullPolicy: IfNotPresent
        resources:
          requests:
            cpu: "16"
            memory: "70Gi"
          limits:
            cpu: "20"
            memory: "80Gi"
        command: ["/bin/bash", "-c"]
        args:
          - |
            # Add the Connector library to LD_PRELOAD to intercept file system calls
            export LD_PRELOAD="/usr/local/lib/libossc_preload.so${LD_PRELOAD:+:$LD_PRELOAD}"
            # Start the vLLM service
            # ENABLE_CONNECTOR=1 enables the OSS Connector to accelerate model loading
            # The model path ${MODEL_DIR}/qwen/Qwen3-8B/ actually points to OSS; the Connector intercepts and redirects it
            ENABLE_CONNECTOR=1 python3 -m vllm.entrypoints.openai.api_server \
              --model ${MODEL_DIR}/qwen/Qwen3-8B/ \
              --trust-remote-code \
              --tensor-parallel-size 1 \
              --disable-custom-all-reduce
        env:
        # OSS access configuration: internal endpoint and region
        - name: OSS_ENDPOINT
          value: "oss-cn-beijing-internal.aliyuncs.com"
        - name: OSS_REGION
          value: "cn-beijing"

        # Root path on OSS; the Connector maps the local directory to this OSS path
        - name: OSS_PATH
          value: "oss://examplebucket/"

        # Local mapped directory (intercepted by the Connector; data is actually loaded from OSS)
        - name: MODEL_DIR
          value: "/var/model"

        # Read OSS access credentials from the Secret oss-access-key-connector
        # Create it in advance: kubectl create secret generic oss-access-key-connector \
        #   --from-literal=key=<OSS_ACCESS_KEY_ID> \
        #   --from-literal=secret=<OSS_ACCESS_KEY_SECRET>
        - name: OSS_ACCESS_KEY_ID
          valueFrom:
            secretKeyRef:
              name: oss-access-key-connector
              key: key
        - name: OSS_ACCESS_KEY_SECRET
          valueFrom:
            secretKeyRef:
              name: oss-access-key-connector
              key: secret
        volumeMounts:
          # Mount the Connector configuration file
          - name: connector-config
            mountPath: /etc/oss-connector/
          # Mount the connector-lib volume at /usr/local/lib in the container
          # The .so file extracted by the init container is then available under /usr/local/lib
          - name: connector-lib
            mountPath: /usr/local/lib
            subPath: usr/local/lib
      terminationGracePeriodSeconds: 10
      volumes:
      # Connector configuration ConfigMap
      - name: connector-config
        configMap:
          name: connector-config
      # Shared volume for passing files between the init container and the main container
      - name: connector-lib
        emptyDir: {}
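
One detail in the start command above is worth spelling out: the `${LD_PRELOAD:+:$LD_PRELOAD}` expansion prepends the Connector library without discarding any preload libraries already configured, because the `:+` form emits the colon separator only when `LD_PRELOAD` is already non-empty. The two cases, shown locally (plain assignments, no `export`, so nothing is actually preloaded):

```shell
# Case 1: LD_PRELOAD unset - no stray separator is produced
unset LD_PRELOAD
LD_PRELOAD="/usr/local/lib/libossc_preload.so${LD_PRELOAD:+:$LD_PRELOAD}"
echo "$LD_PRELOAD"    # /usr/local/lib/libossc_preload.so

# Case 2: LD_PRELOAD already set - the new library is prepended with a colon
LD_PRELOAD="/opt/other.so"
LD_PRELOAD="/usr/local/lib/libossc_preload.so${LD_PRELOAD:+:$LD_PRELOAD}"
echo "$LD_PRELOAD"    # /usr/local/lib/libossc_preload.so:/opt/other.so
```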

Deploy a multi-replica model broadcast service

Using the image myregistry/vllm-with-connector with the Connector component preinstalled as an example, enable model broadcast across multiple replicas. For details about model broadcast, see Model broadcast.

# ConfigMap: stores the Connector configuration template
# Purpose: provide the configuration as a template (config.json.tmpl), rendered by an init container for the main container to read
# Template mount path (init container): /tmpl/config.json.tmpl
# Rendered output path (main container): /etc/oss-connector/config.json
apiVersion: v1
kind: ConfigMap
metadata:
  name: connector-config
data:
  config.json.tmpl: |
    {
        "logLevel": 1,
        "logPath": "/var/log/oss-connector/connector.log",
        "auditPath": "/var/log/oss-connector/audit.log",
        "expireTimeSec": 120,
        "prefetch": {
            "vcpus": 16,
            "workers": 16
        },
        "broadcast": {
            "enableBroadcast": true,
            "tenant": "${REDIS_TENANT}",
            "db": {
                "host": "${REDIS_HOST}",
                "port": 6379,
                "username": "${REDIS_USERNAME}",
                "password": "${REDIS_PASSWORD}"
            }
        },
        "bindPort": 19989
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: model-connector-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: model-connector
  template:
    metadata:
      labels:
        app: model-connector
    spec:
      # Init container: renders the configuration template into the final config.json before the main container starts
      initContainers:
      - name: render-config
        image: busybox
        command: ["/bin/sh", "-c"]
        args:
          # Use sed to substitute each placeholder in the template with the actual environment variable value
          - |
            sed -e "s|\${REDIS_HOST}|$REDIS_HOST|g" \
                -e "s|\${REDIS_USERNAME}|$REDIS_USERNAME|g" \
                -e "s|\${REDIS_PASSWORD}|$REDIS_PASSWORD|g" \
                -e "s|\${REDIS_TENANT}|$REDIS_TENANT|g" \
                /tmpl/config.json.tmpl > /etc/oss-connector/config.json
        env:
        # Read the Redis connection details from the Secret redis-secret
        # Create it in advance: kubectl create secret generic redis-secret \
        #   --from-literal=host=<host> \
        #   --from-literal=username=<username> \
        #   --from-literal=password=<password>
        - name: REDIS_HOST
          valueFrom:
            secretKeyRef:
              name: redis-secret
              key: host
        - name: REDIS_USERNAME
          valueFrom:
            secretKeyRef:
              name: redis-secret
              key: username
        - name: REDIS_PASSWORD
          valueFrom:
            secretKeyRef:
              name: redis-secret
              key: password
        - name: REDIS_TENANT
          value: "broadcast-demo"
        volumeMounts:
          # Mount the template file from the ConfigMap as input for sed
          - name: connector-config
            mountPath: /tmpl
          # Mount the shared emptyDir; the rendered result is written here for the main container
          - name: rendered-config
            mountPath: /etc/oss-connector
      containers:
      - name: vllm
        image: myregistry/vllm-with-connector:latest
        imagePullPolicy: IfNotPresent
        resources:
          requests:
            cpu: "16"
            memory: "70Gi"
          limits:
            cpu: "20"
            memory: "80Gi"
        command: ["/bin/bash", "-c"]
        args:
          - |
            # Add the Connector library to LD_PRELOAD to intercept file system calls
            export LD_PRELOAD="/usr/local/lib/libossc_preload.so${LD_PRELOAD:+:$LD_PRELOAD}"
            # Start the vLLM service
            # ENABLE_CONNECTOR=1 enables the OSS Connector to accelerate model loading
            ENABLE_CONNECTOR=1 python3 -m vllm.entrypoints.openai.api_server \
              --model ${MODEL_DIR}/qwen/Qwen3-8B/ \
              --trust-remote-code \
              --tensor-parallel-size 1 \
              --disable-custom-all-reduce
        env:
        # OSS access configuration: internal endpoint and region
        - name: OSS_ENDPOINT
          value: "oss-cn-beijing-internal.aliyuncs.com"
        - name: OSS_REGION
          value: "cn-beijing"

        # Root path on OSS; the Connector maps the local directory to this OSS path
        - name: OSS_PATH
          value: "oss://examplebucket/"

        # Local mapped directory (intercepted by the Connector; data is actually loaded from OSS)
        - name: MODEL_DIR
          value: "/var/model"

        # Read OSS access credentials from the Secret oss-access-key-connector
        - name: OSS_ACCESS_KEY_ID
          valueFrom:
            secretKeyRef:
              name: oss-access-key-connector
              key: key
        - name: OSS_ACCESS_KEY_SECRET
          valueFrom:
            secretKeyRef:
              name: oss-access-key-connector
              key: secret
        volumeMounts:
          # Mount the Connector configuration file (rendered by the init container)
          - name: rendered-config
            mountPath: /etc/oss-connector/
      terminationGracePeriodSeconds: 10
      volumes:
      # Connector configuration template ConfigMap (mounted into the init container)
      - name: connector-config
        configMap:
          name: connector-config
      # Configuration rendered by the init container (shared with the main container)
      - name: rendered-config
        emptyDir: {}
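
The rendering step performed by the render-config init container can be exercised locally. This sketch uses a trimmed template and dummy values (both hypothetical, standing in for the Secret-provided variables) with the same sed substitution the manifest runs:

```shell
# A trimmed stand-in for config.json.tmpl
cat > config.json.tmpl <<'EOF'
{"tenant": "${REDIS_TENANT}", "host": "${REDIS_HOST}"}
EOF

# Dummy values standing in for the Secret-provided environment variables
REDIS_TENANT="broadcast-demo"
REDIS_HOST="redis.example.internal"

# Same substitution pattern the init container performs
sed -e "s|\${REDIS_HOST}|$REDIS_HOST|g" \
    -e "s|\${REDIS_TENANT}|$REDIS_TENANT|g" \
    config.json.tmpl > config.json
cat config.json    # {"tenant": "broadcast-demo", "host": "redis.example.internal"}
```

Note the escaped `\$` in the sed patterns: it makes sed match the literal `${...}` placeholder text, while the unescaped `$REDIS_HOST` in the replacement is expanded by the shell before sed runs.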