
AnalyticDB:CreateApsKafkaHudiJob

Last Updated:Jan 14, 2026

Creates an APS job that synchronizes data from Kafka to a data lakehouse.


RAM authorization

The table below describes the authorization required to call this API. You can define it in a Resource Access Management (RAM) policy. The table's columns are detailed below:

  • Action: The action that you can specify in the Action element of a RAM policy statement to grant permissions to perform the operation.

  • API: The API that you can call to perform the action.

  • Access level: The predefined level of access granted for each API. Valid values: create, list, get, update, and delete.

  • Resource type: The type of the resource that supports authorization to perform the action. It indicates if the action supports resource-level permission. The specified resource must be compatible with the action. Otherwise, the policy will be ineffective.

    • For APIs with resource-level permissions, required resource types are marked with an asterisk (*). Specify the corresponding Alibaba Cloud Resource Name (ARN) in the Resource element of the policy.

    • For APIs without resource-level permissions, it is shown as All Resources. Use an asterisk (*) in the Resource element of the policy.

  • Condition key: The condition keys defined by the service. The key allows for granular control, applying to either actions alone or actions associated with specific resources. In addition to service-specific condition keys, Alibaba Cloud provides a set of common condition keys applicable across all RAM-supported services.

  • Dependent action: The dependent actions required to run the action. To complete the action, the RAM user or the RAM role must have the permissions to perform all dependent actions.

Action: adb:CreateApsKafkaHudiJob

Access level: none

Resource type: *DBClusterLakeVersion
  acs:adb:{#regionId}:{#accountId}:dbcluster/{#DBClusterId}

Condition key: None

Dependent action: None
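As an illustration, a minimal RAM policy that grants this action on a specific cluster might look like the following sketch. The region, account ID, and cluster ID shown are placeholder values; substitute your own ARN components.

```json
{
  "Version": "1",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "adb:CreateApsKafkaHudiJob",
      "Resource": "acs:adb:cn-hangzhou:123456789012****:dbcluster/amv-bp11q28kvl688****"
    }
  ]
}
```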

Request parameters

  • DBClusterId (string, required): The cluster ID. Note: You can call the DescribeDBClusters operation to query the cluster IDs of all AnalyticDB for MySQL Data Lakehouse Edition (V3.0) clusters in the destination region. Example: amv-bp11q28kvl688****

  • RegionId (string, required): The region ID. Example: cn-hangzhou

  • PartitionSpecs (array<object>, optional): The partition information. Each object supports the following fields:

    • SourceColumn: The name of the source partition field.

    • Strategy: The partition policy. Valid values: ParseAsTimeAndFormat (parse the source value as a time and format it) and CustomSpecify (specify the partition field directly).

    • SourceTypeFormat: The format of the source value. Valid values and their time precision or pattern: APSLiteralTimestampMilliSecond (millisecond precision), APSLiteralTimestampMicroSecond (microsecond precision), APSLiteralTimestampSecond (second precision), APSMIDyyyyMMddHHmmssSSS (yyyy-MM-dd HH:mm:ss.SSS), APSMIDyyyyMMddHHmmss (yyyy-MM-dd HH:mm:ss), APSMIDyyyyMMdd (yyyy-MM-dd), APSyyyyMMddHHmmss (yyyyMMddHHmmss), APSyyyyMMdd (yyyyMMdd), APSyyyyMM (yyyyMM), APSSLAyyyyMMdd (yyyy/MM/dd), APSSLAMMddyyyy (MM/dd/yyyy).

    • TargetTypeFormat: The format of the destination value.

    • TargetColumn: The name of the destination partition field.

    Example: [{ "SourceColumn": "NetOutFlow", "Strategy": "ParseAsTimeAndFormat", "SourceTypeFormat": "APSLiteralTimestampSecond", "TargetTypeFormat": "yyyy-MM-dd", "TargetColumn": "NetOutFlow" }]
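To illustrate what the ParseAsTimeAndFormat strategy does with a second-precision source value and a yyyy-MM-dd target pattern, the conversion can be sketched as follows. This is an illustration only, not the service's implementation; the function name is hypothetical.

```python
from datetime import datetime, timezone

def format_partition_value(epoch_seconds: int, pattern: str = "%Y-%m-%d") -> str:
    """Mimic ParseAsTimeAndFormat for an APSLiteralTimestampSecond source:
    interpret the value as a second-precision epoch timestamp and render it
    with the target pattern (Java's yyyy-MM-dd corresponds to %Y-%m-%d)."""
    return datetime.fromtimestamp(epoch_seconds, tz=timezone.utc).strftime(pattern)

print(format_partition_value(1700000000))
```

The same idea applies to the other SourceTypeFormat values: they tell the job how to interpret the source field before the TargetTypeFormat pattern is applied.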

  • Columns (array<object>, required): The column information. Each object supports the following fields:

    • Name (string): The name of the source field. Example: a

    • MapName (string): The name of the destination field. Example: b

    • Type (string): The type of the source field. Example: string

    • MapType (string): The type of the destination field. Example: string

  • PrimaryKeyDefinition (string, optional): The primary key settings. Two policies are supported:

    • UUID policy: "Strategy": "uuid".

    • Mapping policy: "Strategy": "mapping", "Values": ["f1", "f2"], "RecordVersionField": "xxx", where RecordVersionField specifies the field that holds the Hudi record version.

    Example: "Strategy": "mapping"
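Because PrimaryKeyDefinition is passed as a JSON string, it can help to assemble it programmatically rather than by hand. A sketch of the mapping policy described above (the field names f1, f2 and the version field name are illustrative):

```python
import json

# Mapping policy: the primary key is composed of the listed fields, and
# RecordVersionField names the field that carries the Hudi record version.
primary_key_definition = json.dumps({
    "Strategy": "mapping",
    "Values": ["f1", "f2"],
    "RecordVersionField": "event_time",  # illustrative field name
})

print(primary_key_definition)
```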

  • WorkloadName (string, required): The name of the workload. Example: test

  • LakehouseId (integer, optional): The ID of the lakehouse. Example: 123

  • ResourceGroup (string, required): The name of the resource group. Example: aps

  • HudiAdvancedConfig (string, optional): The Hudi configuration for the destination. Example: hoodie.keep.min.commits=20

  • AdvancedConfig (string, optional): The advanced configuration. Example: -

  • FullComputeUnit (string, optional): The resource configuration for full synchronization. Example: 2ACU

  • IncrementalComputeUnit (string, required): The resource configuration for incremental synchronization. Example: 2ACU

  • KafkaClusterId (string, optional): The ID of the Kafka instance. You can obtain the ID from the Kafka console. Example: xxx

  • KafkaTopic (string, optional): The ID of the Kafka topic. You can obtain the ID from the Kafka console. Example: test

  • StartingOffsets (string, required): The initial Kafka consumer offset. Valid values: begin_cursor (the earliest offset), end_cursor (the latest offset), and timestamp (a specified point in time). Example: begin_cursor

  • MaxOffsetsPerTrigger (integer, optional): The maximum number of entries to consume in a single batch. Example: 50000

  • DbName (string, required): The user-defined name of the database. Example: testDB

  • TableName (string, required): The user-defined name of the table. Example: testTB

  • OutputFormat (string, optional): The output data format. Example: HUDI

  • TargetType (string, optional): The type of the destination. Example: OSS

  • TargetGenerateRule (string, optional): The generation rule for the destination. Example: xxx

  • AcrossUid (string, optional): The ID of the Alibaba Cloud account to which the source Kafka instance belongs. Example: 123************

  • AcrossRole (string, optional): The RAM role whose trusted entity is an Alibaba Cloud account. For information about how to create such a role, see Create a RAM role for a trusted Alibaba Cloud account. The Alibaba Cloud account that owns the AnalyticDB for MySQL cluster must be added to the role as a trusted account. Example: aps

  • SourceRegionId (string, optional): The region ID of the source. Example: cn-hangzhou

  • JsonParseLevel (integer, optional): The number of nested JSON layers to parse. Valid values: 0 (no parsing), 1 (one layer), 2 (two layers), 3 (three layers), and 4 (four layers). Default value: 1. For more information about the JSON parsing policy for nested data, see JSON parsing levels and schema field inference examples. Example: 0

  • DataOutputFormat (string, optional): The source record format. Valid values: Single (the source is a single-line JSON record) and Multi (the source is a JSON array, and single JSON records are returned as the output). Example: Single
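To give a rough intuition for JsonParseLevel, the sketch below expands nested JSON objects into dotted column names down to a chosen depth, keeping deeper values as raw JSON strings. This is assumed behavior for illustration only; the service's actual schema inference is described in the linked JSON parsing levels documentation, and the function and record here are hypothetical.

```python
import json

def parse_levels(record: dict, level: int, prefix: str = "") -> dict:
    """Illustrative JsonParseLevel sketch: expand nested objects into
    dotted column names down to `level` layers; values nested deeper
    than that are kept as raw JSON strings."""
    if level <= 0:
        # Level 0: no parsing; the whole record stays a single JSON string.
        return {"value": json.dumps(record)}
    out = {}
    for key, value in record.items():
        name = f"{prefix}{key}"
        if isinstance(value, dict) and level > 1:
            out.update(parse_levels(value, level - 1, prefix=f"{name}."))
        elif isinstance(value, dict):
            out[name] = json.dumps(value)  # too deep: keep as raw JSON
        else:
            out[name] = value
    return out

record = {"user": {"id": 7, "geo": {"city": "hz"}}, "amount": 3}
print(parse_levels(record, 1))
print(parse_levels(record, 2))
```

At level 1 only the top-level keys become columns; at level 2 the user object is expanded into user.id and user.geo, while the geo object remains a JSON string.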

  • OssLocation (string, optional): The destination lakehouse address. This must be a complete OSS path. Example: oss://test-xx-zzz/yyy/

  • DatasourceId (integer, optional): The data source ID. Example: 1

  • DataFormatType (string, optional): The Kafka message type. Valid values: json, general_canal_json, mongo_canal_json, dataworks_json, and shareplex_json. Example: json
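Putting the parameters together, a request body might look like the following sketch. All values are taken from the per-parameter examples above and are placeholders, not a working configuration.

```json
{
  "DBClusterId": "amv-bp11q28kvl688****",
  "RegionId": "cn-hangzhou",
  "WorkloadName": "test",
  "ResourceGroup": "aps",
  "IncrementalComputeUnit": "2ACU",
  "StartingOffsets": "begin_cursor",
  "DbName": "testDB",
  "TableName": "testTB",
  "OutputFormat": "HUDI",
  "TargetType": "OSS",
  "OssLocation": "oss://test-xx-zzz/yyy/",
  "DataFormatType": "json",
  "Columns": [
    { "Name": "a", "MapName": "b", "Type": "string", "MapType": "string" }
  ]
}
```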

Response elements

The response is an object with the following elements:

  • HttpStatusCode (integer): The HTTP status code. Example: 200

  • Data (string): The ID of the created task. Example: xxx

  • RequestId (string): The request ID. Example: 1A943417-5B0E-1DB9-A8**-A566****C3

  • Success (boolean): Indicates whether the request was successful. Valid values: true (the request was successful) and false (the request failed). Example: true

  • Code (string): The same as the HTTP status code. Example: 200

  • Message (string): The returned message. Example: ok

Examples

Success response

JSON format

{
  "HttpStatusCode": 200,
  "Data": "xxx",
  "RequestId": "1A943417-5B0E-1DB9-A8**-A566****C3",
  "Success": true,
  "Code": "200",
  "Message": "ok"
}

Error codes

See Error Codes for a complete list.

Release notes

See Release Notes for a complete list.