
Container Service for Kubernetes:Workflow persistence

Last Updated: Feb 28, 2026

Argo Workflows periodically deletes workflow-related resources, including run status records. To retain workflow history for analysis and tracing after workflows or their pods are deleted, persist the data to an external database.

This topic describes how to configure an ApsaraDB RDS for MySQL instance as the persistence backend for Argo Workflows running in an ACK cluster.

Prerequisites

Before you begin, ensure that you have:

  • An ACK cluster with Argo Workflows installed in the argo namespace

  • Permissions to create Secrets and modify ConfigMaps in the argo namespace

Step 1: Set up the ApsaraDB RDS for MySQL database

  1. Create an ApsaraDB RDS for MySQL instance. For detailed steps, see Create an RDS MySQL instance.

  2. Create a database in the instance and configure a database account.

  3. Configure the virtual private cloud (VPC) for the instance. The VPC must match the one used by your ACK cluster.

    Important

    Add the CIDR block of this VPC to the IP whitelist of the RDS instance. Otherwise, the Argo Workflow controller cannot connect to the database.

  4. Record the database endpoint, database name, username, and password for use in the next step.

Note

For RDS billing details, see Billable items.
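You can create the database and account in the RDS console, or, as a sketch, with standard SQL after connecting to the instance. The admin account, the database name argo-workflow, and the account name argo_user below are placeholders; substitute your own values:

```shell
# Connect to the RDS instance with the mysql client.
# The endpoint and admin account are placeholders.
mysql -h rm-xxx.mysql.rds.aliyuncs.com -P 3306 -u <admin-account> -p

# Inside the mysql shell, create the database and a dedicated account:
#   CREATE DATABASE `argo-workflow`;
#   CREATE USER 'argo_user'@'%' IDENTIFIED BY '<strong-password>';
#   GRANT ALL PRIVILEGES ON `argo-workflow`.* TO 'argo_user'@'%';
#   FLUSH PRIVILEGES;
```

The grant uses the `'%'` host pattern because the connection originates from pod IP addresses inside the cluster VPC; you can narrow it to the VPC CIDR block if your security policy requires it.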

Step 2: Create a Secret for database credentials

In the argo namespace, create a Secret named argo-mysql-config to store the database credentials:

apiVersion: v1
kind: Secret
metadata:
  name: argo-mysql-config
  namespace: argo
type: Opaque
stringData:
  username: <your-database-username>  # Replace with your RDS database account
  password: <your-database-password>  # Replace with your RDS database password

Save the manifest as argo-mysql-secret.yaml and apply it:

kubectl apply -f argo-mysql-secret.yaml
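As an alternative to the manifest, the same Secret can be created imperatively with kubectl. The values shown are placeholders for your RDS database account and password:

```shell
# Create the Secret directly; replace the placeholder values.
kubectl create secret generic argo-mysql-config -n argo \
  --from-literal=username=<your-database-username> \
  --from-literal=password=<your-database-password>
```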

Step 3: Add persistence configuration to the ConfigMap

Modify workflow-controller-configmap in the argo namespace. Add the following persistence configuration in the data field:

data:
  persistence: |
    connectionPool:
      maxIdleConns: 100
      maxOpenConns: 0
      connMaxLifetime: 0s     # 0 means connections don't have a max lifetime
    archiveTTL: 30d
    archive: true
    mysql:
      host: rm-xxx.mysql.rds.aliyuncs.com
      port: 3306
      database: argo-workflow
      tableName: argo_workflows
      userNameSecret:
        name: argo-mysql-config
        key: username
      passwordSecret:
        name: argo-mysql-config
        key: password
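One way to make this change is to open the ConfigMap in an editor, add the persistence block under the data field, and then confirm that it was saved:

```shell
# Open the workflow controller ConfigMap for editing.
kubectl edit configmap workflow-controller-configmap -n argo

# Confirm that the persistence block is present.
kubectl get configmap workflow-controller-configmap -n argo -o yaml | grep -A 5 'persistence'
```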

Key parameters

  • host: The endpoint of your ApsaraDB RDS for MySQL instance.

  • port: The port of your RDS instance. The standard MySQL port is 3306.

  • database: The name of the database that you created in Step 1.

  • tableName: Required. The name of the table in which workflow data is stored.

  • archive: Enables workflow persistence. Set this parameter to true.

  • archiveTTL: The retention period of archived workflows. This parameter has no maximum value. Example: 30d retains data for 30 days.

  • maxIdleConns: The maximum number of idle connections in the connection pool.

  • maxOpenConns: The maximum number of open connections. A value of 0 indicates no limit.

  • connMaxLifetime: The maximum lifetime of a connection. A value of 0s indicates no limit.

Step 4: Restart Argo components

Argo core components do not automatically detect ConfigMap changes. Restart both the Argo Workflow controller and Argo Server to apply the persistence configuration:

kubectl rollout restart deployment workflow-controller -n argo
kubectl rollout restart deployment argo-server -n argo
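You can wait for both rollouts to finish before moving on to verification:

```shell
# Block until each Deployment has finished rolling out.
kubectl rollout status deployment workflow-controller -n argo
kubectl rollout status deployment argo-server -n argo
```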

Verify the configuration

After restarting the components, verify that the persistence configuration is applied:

  1. Check the Argo Workflow controller logs for database connection messages:

       kubectl logs deployment/workflow-controller -n argo | grep -i persist
  2. Run a test workflow. After the workflow completes, verify that the workflow data is stored in the RDS database.
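As a sketch of the test in substep 2, you can submit one of the upstream example workflows and then query the configured table. This assumes the Argo CLI is installed and that your table is named argo_workflows as in Step 3; the endpoint and database name are the placeholders used earlier:

```shell
# Submit a sample workflow and wait for it to complete (requires the Argo CLI).
argo submit -n argo --wait \
  https://raw.githubusercontent.com/argoproj/argo-workflows/main/examples/hello-world.yaml

# Check that workflow rows were written to the database.
mysql -h rm-xxx.mysql.rds.aliyuncs.com -P 3306 -u <your-database-username> -p \
  -e 'SELECT * FROM argo_workflows LIMIT 5;' argo-workflow
```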

Billing

Setting up workflow persistence incurs the following additional costs:

  • ApsaraDB RDS for MySQL: Instance fees based on the selected specification and storage capacity. For details, see Billable items.