
CloudFlow: Use Terraform to configure workflow scheduling

Last Updated: Aug 06, 2025

Terraform is an open source tool that you can use to preview, configure, and manage cloud infrastructure and resources in a secure and efficient manner. You can schedule CloudFlow workflows in the EventBridge console, but EventBridge does not provide API operations for scheduling CloudFlow workflows. Therefore, Terraform cannot schedule CloudFlow workflows directly. Instead, you can use Terraform to create the EventBridge resources that schedule the workflows.

Note

You can run the sample code in this topic with a few clicks.

Prerequisites

  • CloudFlow is activated.

  • EventBridge is activated.

  • An Alibaba Cloud account has all permissions on resources within the account. If the credentials of an Alibaba Cloud account are leaked, the resources within the account are exposed to major risks. We recommend that you use a RAM user and create an AccessKey pair for the RAM user. For more information, see Create a RAM user and Create an AccessKey pair. A sketch of how to provide the AccessKey pair to Terraform is shown after these prerequisites.

  • The AliyunFnFFullAccess policy that allows the RAM user to manage CloudFlow resources is attached to the RAM user. For more information, see Grant permissions to a RAM user.

  • The runtime environment for Terraform is prepared by using one of the following methods:

    • Use Terraform in Terraform Explorer: Alibaba Cloud provides Terraform Explorer, an online runtime environment for Terraform. You can use Terraform after you log on to Terraform Explorer without the need to install Terraform. This method is suitable for scenarios in which you want to use and debug Terraform in a fast and convenient manner at no additional costs.

    • Use Terraform in Cloud Shell: Terraform is preinstalled in Cloud Shell and identity credentials are configured. You can directly run Terraform commands in Cloud Shell. This method is suitable for scenarios in which you want to use and debug Terraform in a fast and convenient manner at low costs.

    • Install and configure Terraform on your on-premises machine: This method is suitable for scenarios in which network connections are unstable or a custom development environment is required.

Important

You must install Terraform 0.12.28 or later. You can run the terraform --version command to query the Terraform version.
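
If you run Terraform on your on-premises machine, the Alibaba Cloud provider can read the RAM user's AccessKey pair from environment variables. The following commands are a minimal sketch, and the placeholder values are assumptions that you must replace with your own credentials:

export ALICLOUD_ACCESS_KEY="<AccessKey ID of the RAM user>"
export ALICLOUD_SECRET_KEY="<AccessKey secret of the RAM user>"
terraform --version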

Required resources

The sample code in this topic creates the following resources:

  • random_integer: generates a random number that is used as a suffix in resource names.

  • alicloud_ram_policy: creates a RAM policy that defines the required permissions.

  • alicloud_ram_role: creates a RAM role that is used to start workflow executions.

  • alicloud_ram_role_policy_attachment: attaches the RAM policy to the RAM role.

  • alicloud_fnf_flow: creates a CloudFlow workflow.

  • alicloud_event_bridge_event_bus: creates an event bus.

  • alicloud_mns_queue: creates a Simple Message Queue (SMQ, former MNS) queue.

  • alicloud_event_bridge_event_source: creates an event source.

  • alicloud_event_bridge_rule: creates an event rule that routes events to the CloudFlow workflow.

Use Terraform to configure workflow scheduling

  1. Create a working directory and a configuration file named main.tf in the directory. main.tf is the main file of Terraform and defines the resources that you want to deploy.

    variable "region" {
      default = "cn-hangzhou"
    }
    provider "alicloud" {
      region = var.region
    }
    # The name prefix of the SMQ queue. 
    variable "name" {
      default = "test-mns"
    }
    # The name of the policy. 
    variable "policy_name" {
      type = string
      description = "The name of the policy."
      default = "test-policy"
    }
    # The name of the role.
    variable "role_name" {
      type = string
      description = "The role for eb to start execution of flow."
      default = "eb-to-fnf-role"
    }
    # The name of the flow.
    variable "flow_name" {
      type = string
      description = "The name of the flow."
      default = "test-flow"
    }
    # The description of the flow.
    variable "flow_description" {
      default = "For flow_description"
    }
    # The name of the bus. 
    variable "event_bus_name" {
      type = string
      description = "The name of the event bus."
      default = "test-eventbus1"
    }
    # The description of the bus. 
    variable "event_bus_description" {
      default = "For event_bus_description"
    }
    # The code name of the event source.
    variable "event_source_name" {
      type = string
      description = "The name of the event source."
      default = "test-eventsource1"
    }
    # The name of the event rule.
    variable "event_rule_name" {
        type = string
        description = "The name of the event rule."
        default = "test-eventrule1"
    }
    # The ID of the custom event target.
    variable "target_id" {
        type = string
        description = "The ID of the target."
        default = "test-target1"
    }
    # Obtain the ID of the current Alibaba Cloud account.
    data "alicloud_account" "current" {
    }
    # Create a random number.
    resource "random_integer" "default" {
      min = 10000
      max = 99999
    }
    # Create a RAM policy to define permissions. 
    resource "alicloud_ram_policy" "policy_exmaple" {
      # The name of the RAM policy. 
      policy_name     = "${var.policy_name}-${random_integer.default.result}"
      # Optional. This parameter is used to destroy resources. Default value: false. 
      force           = true 
      # The document of the RAM policy.
       policy_document = <<EOF
      {
        "Statement": [
          {
            "Action": [
              "fnf:*",        
              "mns:*",         
              "eventbridge:*",       
              "ram:*"  
            ],
            "Effect": "Allow",
            "Resource": [
              "*"
            ]
          }
        ],
          "Version": "1"
      }
      EOF
    }
    # Create a RAM role. 
    resource "alicloud_ram_role" "role_example" {
      # The name of the RAM role.
      name     = var.role_name
      # Optional. This parameter is used to destroy resources. Default value: false. 
      force           = true 
      # The document of the role policy.
      document = <<EOF
      {
        "Statement": [
          {
            "Action": "sts:AssumeRole",
            "Effect": "Allow",
            "Principal": {
              "Service": [
                "fnf.aliyuncs.com"
              ]
            }
          }
        ],
        "Version": "1"
      }
      EOF
    }
    # Attach the RAM policy to the RAM role to grant the permissions in the policy to the RAM role. 
    resource "alicloud_ram_role_policy_attachment" "attach_example" {
      # The name of the RAM policy.
      policy_name = alicloud_ram_policy.policy_exmaple.policy_name
      # The type of the RAM policy.
      policy_type = alicloud_ram_policy.policy_exmaple.type
      # The name of the RAM role.
      role_name   = alicloud_ram_role.role_example.name
    }
    # Create a CloudFlow flow.
    resource "alicloud_fnf_flow" "flow_example" {
      depends_on = [alicloud_ram_role_policy_attachment.attach_example]
      # Required. The definition of the flow. The definition must conform to the syntax of the Flow Definition Language (FDL). 
      definition  = <<EOF
      Type: StateMachine
      Name: ${var.flow_name}
      SpecVersion: v1
      StartAt: Hello World
      States:
      - Type: Pass
        Name: Hello World
        End: true
      EOF
      # The ARN of the RAM role specified by CloudFlow to execute the flow. 
      role_arn    = alicloud_ram_role.role_example.arn
      # The description of the flow.
      description = var.flow_description
      # The name of the flow.
      name        = var.flow_name
      # The type of the flow. Valid values: FDL and DEFAULT. 
      type        = "FDL"
    }
    # Create an event bus to receive and route events. 
    resource "alicloud_event_bridge_event_bus" "eventbus_example" {
      # The name of the event bus.
      event_bus_name = var.event_bus_name
      # Optional. The description of the event bus.
      description = var.event_bus_description
    }
    # Create a Simple Message Queue (SMQ, former MNS) queue.
    resource "alicloud_mns_queue" "example" {
      # The name of the SMQ queue.
      name = "${var.name}-${random_integer.default.result}"
      # The period (unit: seconds) for which a message sent to the queue is delayed before it can be consumed.
      delay_seconds            = 0
      # The maximum size (unit: bytes) of a message body that can be sent to the SMQ queue.
      maximum_message_size     = 65536
      # The period of time (unit: seconds) for which a message is retained in the queue. 
      message_retention_period = 345600
      # The visibility timeout (unit: seconds) of messages in the queue. 
      visibility_timeout       = 30
      # The interval of long polling. Unit: seconds.
      polling_wait_seconds     = 0
    }
    # Create an event source to generate scheduled events. 
    resource "alicloud_event_bridge_event_source" "eventsource_example" {
      # The name of the event bus. 
      event_bus_name         = alicloud_event_bridge_event_bus.eventbus_example.event_bus_name
      # The code name of the event source.
      event_source_name      = var.event_source_name
      # Optional. Specify whether to connect to an external data source. Default value: false. 
      linked_external_source = true
      # Optional. The type of the external data source. Valid values: RabbitMQ, RocketMQ, and MNS. This parameter is valid only if linked_external_source is set to true. 
      external_source_type   = "MNS"
      # Optional. A map. The configuration of the external source. 
      external_source_config = {
        QueueName = alicloud_mns_queue.example.name
      }
    }
    
    # A local variable, which is used to store the ARN of the CloudFlow workflow. 
    locals {
        flow_arn = format("acs:fnf:%s:%s:flow/%s", var.region, data.alicloud_account.current.id, var.flow_name)
    }
    
    # Create an event rule to match events generated by the event source and route the events to the specified CloudFlow workflow. 
    resource "alicloud_event_bridge_rule" "eventrule_example" {
      # The name of the event bus. 
      event_bus_name = alicloud_event_bridge_event_bus.eventbus_example.event_bus_name
      # The name of the event rule.
      rule_name      = var.event_rule_name
      # The event pattern, in JSON format, that is used to match events of interest. The stringEqual and stringExpression matching modes are supported. 
      filter_pattern = format("{\"source\":[\"%s\"]}", var.event_source_name)
      # The target of the rule.
      targets {
        # The ID of the custom event target. 
        target_id = var.target_id
        # The endpoint of the event target. 
        endpoint  = local.flow_arn
        # The type of the event target. 
        type      = "acs.fnf"
        param_list {
          resource_key = "Input"
          form         = "ORIGINAL"
        }
        param_list {
          form         = "CONSTANT"
          resource_key = "FlowName"
          value        = var.flow_name
        }
        param_list {
          form         = "CONSTANT"
          resource_key = "RoleName"
          value        = var.role_name
        }
      }
    }
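
    Optionally, you can override the default values of the variables declared in main.tf by creating a terraform.tfvars file in the same working directory. Terraform loads this file automatically. The following content is only a sketch, and the values are example names, not requirements:

    # terraform.tfvars: overrides the variable defaults declared in main.tf.
    region          = "cn-hangzhou"
    flow_name       = "my-test-flow"
    event_bus_name  = "my-test-eventbus"
    event_rule_name = "my-test-eventrule"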
  2. Run the following command to initialize the runtime environment of Terraform.

    terraform init

    If the following information is returned, Terraform is initialized.

    Initializing the backend...
    
    Initializing provider plugins...
    - Finding latest version of hashicorp/alicloud...
    - Installing hashicorp/alicloud v1.234.0...
    - Installed hashicorp/alicloud v1.234.0 (signed by HashiCorp)
    
    Terraform has created a lock file .terraform.lock.hcl to record the provider
    selections it made above. Include this file in your version control repository
    so that Terraform can guarantee to make the same selections by default when
    you run "terraform init" in the future.
    
    Terraform has been successfully initialized!
    
    You may now begin working with Terraform. Try running "terraform plan" to see
    any changes that are required for your infrastructure. All Terraform commands
    should now work.
    
    If you ever set or change modules or backend configuration for Terraform,
    rerun this command to reinitialize your working directory. If you forget, other
    commands will detect it and remind you to do so if necessary.
  3. Run the following command to create an execution plan and preview the changes:

    terraform plan
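
    Optionally, you can save the plan to a file and apply that exact plan in the next step. This is a sketch of an alternative workflow; the file name tfplan is arbitrary:

    terraform plan -out=tfplan
    # In the next step, run "terraform apply tfplan" instead of "terraform apply".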
  4. Run the following command to apply the configuration and create the workflow scheduling:

    terraform apply

    When Terraform asks for your confirmation, enter yes and press Enter. Wait for the command to complete. If the following information is returned, the workflow scheduling is created:

    Plan: 9 to add, 0 to change, 0 to destroy.
    
    Do you want to perform these actions?
      Terraform will perform the actions described above.
      Only 'yes' will be accepted to approve.
    
      Enter a value: yes
    
    random_integer.default: Creating...
    random_integer.default: Creation complete after 0s [id=10***]
    alicloud_ram_policy.policy_exmaple: Creating...
    alicloud_ram_role.role_example: Creating...
    alicloud_mns_queue.example: Creating...
    alicloud_event_bridge_event_bus.eventbus_example: Creating...
    alicloud_mns_queue.example: Creation complete after 0s [id=test-mns-10***]
    alicloud_ram_policy.policy_exmaple: Creation complete after 1s [id=test-policy-10***]
    alicloud_ram_role.role_example: Creation complete after 1s [id=eb-to-fnf-r***]
    alicloud_ram_role_policy_attachment.attach_example: Creating...
    alicloud_ram_role_policy_attachment.attach_example: Creation complete after 0s [id=role:test-policy-10486:Custom:eb-to-f***]
    alicloud_fnf_flow.flow_example: Creating...
    alicloud_fnf_flow.flow_example: Creation complete after 0s [id=test-f***]
    alicloud_event_bridge_event_bus.eventbus_example: Creation complete after 2s [id=test-event***]
    alicloud_event_bridge_rule.eventrule_example: Creating...
    alicloud_event_bridge_event_source.eventsource_example: Creating...
    alicloud_event_bridge_event_source.eventsource_example: Creation complete after 0s [id=test-eventsour***]
    alicloud_event_bridge_rule.eventrule_example: Creation complete after 0s [id=test-eventbus1:test-event***]
    
    Apply complete!  Resources: 9 added, 0 changed, 0 destroyed.
  5. Verify whether the workflow scheduling is created.

    Run the terraform show command

    Run the following command to query the resources that are created by Terraform:

    terraform show
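
    You can also inspect individual resources that Terraform tracks in its state. The following commands are a sketch; the resource addresses match the names that are used in main.tf:

    # List all resources in the Terraform state.
    terraform state list
    # Show the attributes of the CloudFlow workflow and the event rule.
    terraform state show alicloud_fnf_flow.flow_example
    terraform state show alicloud_event_bridge_rule.eventrule_example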


    Use the CloudFlow console

    After the workflow scheduling is created, you can call API operations, use SDKs, or log on to the CloudFlow console to check whether the creation operation is completed.

Release resources

If you no longer need the preceding resources that are created or managed by using Terraform, run the following command to release the resources. For more information about the terraform destroy command, see Common commands.

terraform destroy
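
If you only want to preview which resources would be deleted before you confirm the deletion, you can first run a destroy-mode plan. This optional check is a sketch, not a required step:

terraform plan -destroy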

Sample code

Note

You can run the sample code with a few clicks.


variable "region" {
  default = "cn-hangzhou"
}
provider "alicloud" {
  region = var.region
}
# The name prefix of the SMQ queue. 
variable "name" {
  default = "test-mns"
}
# The name of the policy. 
variable "policy_name" {
  type = string
  description = "The name of the policy."
  default = "test-policy"
}
# The name of the role.
variable "role_name" {
  type = string
  description = "The role for eb to start execution of flow."
  default = "eb-to-fnf-role"
}
# The name of the CloudFlow workflow.
variable "flow_name" {
  type = string
  description = "The name of the flow."
  default = "test-flow"
}
# The description of the flow.
variable "flow_description" {
  default = "For flow_description"
}
# The name of the bus. 
variable "event_bus_name" {
  type = string
  description = "The name of the event bus."
  default = "test-eventbus1"
}
# The description of the bus. 
variable "event_bus_description" {
  default = "For event_bus_description"
}
# The code name of the event source.
variable "event_source_name" {
  type = string
  description = "The name of the event source."
  default = "test-eventsource1"
}
# The name of the event rule.
variable "event_rule_name" {
    type = string
    description = "The name of the event rule."
    default = "test-eventrule1"
}
# The ID of the custom event target.
variable "target_id" {
    type = string
    description = "The ID of the target."
    default = "test-target1"
}
# Obtain the ID of the current Alibaba Cloud account.
data "alicloud_account" "current" {
}
# Create a random number.
resource "random_integer" "default" {
  min = 10000
  max = 99999
}
# Create a RAM policy to define permissions. 
resource "alicloud_ram_policy" "policy_exmaple" {
  # The name of the RAM policy. 
  policy_name     = "${var.policy_name}-${random_integer.default.result}"
  # Optional. This parameter is used to destroy resources. Default value: false. 
  force           = true 
  # The document of the RAM policy.
   policy_document = <<EOF
  {
    "Statement": [
      {
        "Action": [
          "fnf:*",        
          "mns:*",         
          "eventbridge:*",       
          "ram:*"  
        ],
        "Effect": "Allow",
        "Resource": [
          "*"
        ]
      }
    ],
      "Version": "1"
  }
  EOF
}
# Create a RAM role. 
resource "alicloud_ram_role" "role_example" {
  # The name of the RAM role.
  name     = var.role_name
  # Optional. This parameter is used to destroy resources. Default value: false. 
  force           = true 
  # The document of the role policy.
  document = <<EOF
  {
    "Statement": [
      {
        "Action": "sts:AssumeRole",
        "Effect": "Allow",
        "Principal": {
          "Service": [
            "fnf.aliyuncs.com"
          ]
        }
      }
    ],
    "Version": "1"
  }
  EOF
}
# Attach the RAM policy to the RAM role to grant the permissions in the policy to the RAM role. 
resource "alicloud_ram_role_policy_attachment" "attach_example" {
  # The name of the RAM policy.
  policy_name = alicloud_ram_policy.policy_exmaple.policy_name
  # The type of the RAM policy.
  policy_type = alicloud_ram_policy.policy_exmaple.type
  # The name of the RAM role.
  role_name   = alicloud_ram_role.role_example.name
}
# Create a CloudFlow workflow.
resource "alicloud_fnf_flow" "flow_example" {
  depends_on = [alicloud_ram_role_policy_attachment.attach_example]
  # Required. The definition of the flow. The definition must conform to the syntax of the FDL. 
  definition  = <<EOF
  Type: StateMachine
  Name: ${var.flow_name}
  SpecVersion: v1
  StartAt: Hello World
  States:
  - Type: Pass
    Name: Hello World
    End: true
  EOF
  # The ARN of the RAM role specified by CloudFlow to execute the flow. 
  role_arn    = alicloud_ram_role.role_example.arn
  # The description of the flow.
  description = var.flow_description
  # The name of the flow.
  name        = var.flow_name
  # The type of the flow. Valid values: FDL and DEFAULT. 
  type        = "FDL"
}
# Create an event bus to receive and route events. 
resource "alicloud_event_bridge_event_bus" "eventbus_example" {
  # The name of the event bus.
  event_bus_name = var.event_bus_name
  # Optional. The description of the event bus.
  description = var.event_bus_description
}
# Create an SMQ queue.
resource "alicloud_mns_queue" "example" {
  # The name of the SMQ queue.
  name = "${var.name}-${random_integer.default.result}"
  # The period (unit: seconds) for which a message sent to the queue is delayed before it can be consumed.
  delay_seconds            = 0
  # The maximum size (unit: bytes) of a message body that can be sent to the SMQ queue.
  maximum_message_size     = 65536
  # The period of time (unit: seconds) for which a message is retained in the queue. 
  message_retention_period = 345600
  # The visibility timeout (unit: seconds) of messages in the queue. 
  visibility_timeout       = 30
  # The interval of long polling. Unit: seconds.
  polling_wait_seconds     = 0
}
# Create an event source to generate scheduled events. 
resource "alicloud_event_bridge_event_source" "eventsource_example" {
  # The name of the event bus. 
  event_bus_name         = alicloud_event_bridge_event_bus.eventbus_example.event_bus_name
  # The code name of the event source.
  event_source_name      = var.event_source_name
  # Optional. Specify whether to connect to an external data source. Default value: false. 
  linked_external_source = true
  # Optional. The type of the external data source. Valid values: RabbitMQ, RocketMQ, and MNS. This parameter is valid only if linked_external_source is set to true. 
  external_source_type   = "MNS"
  # Optional. A map. The configuration of the external source. 
  external_source_config = {
    QueueName = alicloud_mns_queue.example.name
  }
}

# A local variable, which is used to store the ARN of the CloudFlow workflow. 
locals {
    flow_arn = format("acs:fnf:%s:%s:flow/%s", var.region, data.alicloud_account.current.id, var.flow_name)
}

# Create an event rule to match events generated by the event source and route the events to the specified CloudFlow workflow. 
resource "alicloud_event_bridge_rule" "eventrule_example" {
  # The name of the event bus. 
  event_bus_name = alicloud_event_bridge_event_bus.eventbus_example.event_bus_name
  # The name of the event rule.
  rule_name      = var.event_rule_name
  # The event pattern, in JSON format, that is used to match events of interest. The stringEqual and stringExpression matching modes are supported. 
  filter_pattern = format("{\"source\":[\"%s\"]}", var.event_source_name)
  # The target of the rule.
  targets {
    # The ID of the custom event target. 
    target_id = var.target_id
    # The endpoint of the event target. 
    endpoint  = local.flow_arn
    # The type of the event target. 
    type      = "acs.fnf"
    param_list {
      resource_key = "Input"
      form         = "ORIGINAL"
    }
    param_list {
      form         = "CONSTANT"
      resource_key = "FlowName"
      value        = var.flow_name
    }
    param_list {
      form         = "CONSTANT"
      resource_key = "RoleName"
      value        = var.role_name
    }
  }
}

If you want to view more sample code, visit the directory of the corresponding service on the Quickstarts page.