Confluent CLI lets you manage ApsaraMQ for Confluent clusters, role-based access control (RBAC) role bindings, and access control lists (ACLs) from the command line. Use it to authenticate to Metadata Service (MDS), retrieve cluster IDs, and configure fine-grained permissions for users and resources.
Before you begin
Make sure you have the following:
An ApsaraMQ for Confluent instance
A Lightweight Directory Access Protocol (LDAP) user with the required permissions. To create or manage users, see Manage users and grant permissions to them.
The MDS endpoint and TLS certificate for your instance (available from the ApsaraMQ for Confluent console)
Install Confluent CLI
Step 1: Download the binary
Download the Confluent CLI binary for your operating system from the Confluent CLI install page, or use a direct link from the following table:
| Operating system | Architecture | Download |
|---|---|---|
| macOS (Darwin) | AMD64 | confluent_darwin_amd64.tar.gz |
| macOS (Darwin) | ARM64 | confluent_darwin_arm64.tar.gz |
| Windows | AMD64 | confluent_windows_amd64.zip |
| Linux | AMD64 | confluent_linux_amd64.tar.gz |
| Linux | ARM64 | confluent_linux_arm64.tar.gz |
| Alpine Linux | AMD64 | confluent_alpine_amd64.tar.gz |
| Alpine Linux | ARM64 | confluent_alpine_arm64.tar.gz |
To verify file integrity, download the checksums file.
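As a sketch of the verification workflow (the filenames below are stand-ins for the archive and checksums file you actually downloaded; a throwaway file is created so the commands run end to end):

```shell
# Stand-in for the downloaded archive; in practice you download both the
# archive and the published checksums file instead of generating them.
echo "archive contents" > confluent_linux_amd64.tar.gz
sha256sum confluent_linux_amd64.tar.gz > checksums.txt

# Verify the archive against the checksums file. Prints "<filename>: OK"
# on success and fails if the archive was corrupted in transit.
sha256sum --check checksums.txt
```

On macOS, `shasum -a 256` provides the equivalent of GNU `sha256sum`.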
Step 2: Add the CLI to your PATH
Set the PATH environment variable to include the directory that contains the extracted binary:
export PATH=<path-to-cli>:$PATH
Replace <path-to-cli> with the absolute path to the directory where you extracted the Confluent CLI binary.
Step 3 (optional): Change the data directory
By default, the Confluent CLI stores logs and data in a local directory. If this directory does not have enough storage space, set the CONFLUENT_CURRENT environment variable to a different location:
export CONFLUENT_CURRENT=<path-to-confluent-local-data>
Step 4: Verify the installation
Run the following command:
confluent
If the installation succeeded, output similar to the following is returned:
Manage your Confluent Platform.
Usage:
confluent [command]
Available Commands:
audit-log Manage audit log configuration.
cloud-signup Sign up for Confluent Cloud.
cluster Retrieve metadata about Confluent Platform clusters.
completion Print shell completion code.
configuration Configure the Confluent CLI.
connect Manage Kafka Connect.
context Manage CLI configuration contexts.
flink Manage Apache Flink.
help Help about any command
iam Manage RBAC, ACL and IAM permissions.
kafka Manage Apache Kafka.
ksql Manage ksqlDB.
local Manage a local Confluent Platform development environment.
login Log in to Confluent Cloud or Confluent Platform.
logout Log out of Confluent Platform.
plugin Manage Confluent plugins.
prompt Add Confluent CLI context to your terminal prompt.
schema-registry Manage Schema Registry.
secret Manage secrets for Confluent Platform.
shell Start an interactive shell.
update Update the Confluent CLI.
version Show version of the Confluent CLI.
Flags:
--version Show version of the Confluent CLI.
-h, --help Show help for this command.
--unsafe-trace Equivalent to -vvvv, but also log HTTP requests and responses which might contain plaintext secrets.
-v, --verbose count Increase verbosity (-v for warn, -vv for info, -vvv for debug, -vvvv for trace).
Use "confluent [command] --help" for more information about a command.
Log in to MDS
Authenticate to the Confluent Platform Metadata Service (MDS) before running cluster or permission management commands. MDS uses HTTPS for encrypted transmission.
Gather the following information from the ApsaraMQ for Confluent console:
| Information | Where to find it |
|---|---|
| LDAP username and password | Users page |
| MDS endpoint | Access Links and Ports page |
| TLS certificate | Certificate section on the Instance Details page |
Run the following command to log in:
confluent login \
--url <mds-endpoint> \
--certificate-authority-path <path-to-certificate.pem>
When prompted, enter your LDAP username and password:
Enter your Confluent credentials:
Username: <your-username>
Password: <your-password>
| Placeholder | Description | Example |
|---|---|---|
| <mds-endpoint> | Public or private MDS endpoint | https://pub-kafka-xxxxxxxxx.csp.aliyuncs.com:443 |
| <path-to-certificate.pem> | Path to the downloaded TLS certificate | /etc/confluent/certs/ca.pem |
A successful login returns to the command prompt without an error message.
Retrieve cluster IDs
Many Confluent CLI commands require a cluster ID. Use confluent cluster describe with the service endpoint to retrieve it.
Get the service endpoints from the Access Links and Ports page in the ApsaraMQ for Confluent console. For example, the ksqlDB public endpoint uses the format https://pub-ksqldb-xxxxxxxxxxx.csp.aliyuncs.com:443.
# Kafka cluster
confluent cluster describe --url <mds-url>
# Schema Registry cluster
confluent cluster describe --url <schema-registry-url>
# ksqlDB cluster
confluent cluster describe --url <ksqldb-url>
Manage RBAC permissions
ApsaraMQ for Confluent uses predefined RBAC roles to manage permissions. RBAC assigns broad, role-based permissions to users at the cluster or resource level. For the full list of available roles, see Use Predefined RBAC Roles in Confluent Platform.
For more examples, see Examples of RBAC authorization using Confluent CLI.
Log in to MDS before running RBAC commands. All commands in this document support the following global flags: -h, --help (show help), --unsafe-trace (equivalent to -vvvv, also logs HTTP requests and responses), and -v, --verbose count (increase verbosity).
List Identity and Access Management (IAM) roles
List all available IAM roles. IAM roles define the access permissions of users and services to resources based on RBAC.
confluent iam rbac role list
Describe an IAM role
View the details of a specific IAM role:
confluent iam rbac role describe <role-name>
Flags
| Flag | Description |
|---|---|
| --client-cert-path | Path to client certificate for mTLS authentication |
| --client-key-path | Path to client private key for mTLS authentication |
| --context | CLI context name |
| -o, --output | Output format: human, json, or yaml (default: human) |
Create a role binding
Bind an IAM role to a user or service principal:
confluent iam rbac role-binding create [flags]
Required flags
| Flag | Description |
|---|---|
| --role | Role name to assign |
| --principal | Principal in User:<username> format |
Scope flags
| Flag | Description |
|---|---|
| --kafka-cluster | Kafka cluster ID |
| --schema-registry-cluster | Schema Registry cluster ID |
| --ksql-cluster | ksqlDB cluster ID |
| --connect-cluster | Kafka Connect cluster ID |
| --cmf | Confluent Managed Flink (CMF) ID |
| --flink-environment | Flink environment ID |
Other flags
| Flag | Description |
|---|---|
| --resource | Resource in Prefix:ID format |
| --prefix | Treat the resource name as a prefix pattern |
| --cluster-name | Cluster name for role binding listings |
| --context | CLI context name |
| --client-cert-path | Path to client certificate for mTLS authentication |
| --client-key-path | Path to client private key for mTLS authentication |
| -o, --output | Output format: human, json, or yaml (default: human) |
Examples
Grant DeveloperRead on all Schema Registry subjects to user sr-read:
confluent iam rbac role-binding create \
--principal User:sr-read \
--role DeveloperRead \
--resource Subject:* \
--kafka-cluster <kafka-cluster-id> \
--schema-registry-cluster <schema-registry-cluster-id>
Grant SystemAdmin on the Schema Registry cluster to user sr-admin:
confluent iam rbac role-binding create \
--principal User:sr-admin \
--role SystemAdmin \
--kafka-cluster <kafka-cluster-id> \
--schema-registry-cluster <schema-registry-cluster-id>
List role bindings
View existing role bindings:
confluent iam rbac role-binding list [flags]
Key flags
| Flag | Description |
|---|---|
| --principal | Filter by principal ID. If omitted, lists all principals. |
| --current-user | List role bindings for the current user |
| --role | Filter by role name. If --principal is omitted, lists all principals with this role. |
| --kafka-cluster | Kafka cluster ID scope |
| --schema-registry-cluster | Schema Registry cluster ID scope |
| --ksql-cluster | ksqlDB cluster ID scope |
| --connect-cluster | Kafka Connect cluster ID scope |
| --cmf | Confluent Managed Flink (CMF) ID scope |
| --flink-environment | Flink environment ID scope |
| --resource | Resource in Prefix:ID format |
| --inclusive | Include role bindings for nested scopes |
| --client-cert-path | Path to client certificate for mTLS authentication |
| --client-key-path | Path to client private key for mTLS authentication |
| --cluster-name | Cluster name, which specifies the cluster scope |
| -o, --output | Output format: human, json, or yaml (default: human) |
Examples
List all users with DeveloperRead on a Schema Registry cluster:
confluent iam rbac role-binding list \
--role DeveloperRead \
--kafka-cluster <kafka-cluster-id> \
--schema-registry-cluster <schema-registry-cluster-id>
List all users with SystemAdmin on a Schema Registry cluster:
confluent iam rbac role-binding list \
--role SystemAdmin \
--kafka-cluster <kafka-cluster-id> \
--schema-registry-cluster <schema-registry-cluster-id>
Delete a role binding
Remove an IAM role from a user:
confluent iam rbac role-binding delete [flags]
This command uses the same flags as role-binding create, plus:
| Flag | Description |
|---|---|
| --force | Skip the deletion confirmation prompt |
Examples
Remove DeveloperRead from user sr-read on a Schema Registry cluster:
confluent iam rbac role-binding delete \
--principal User:sr-read \
--role DeveloperRead \
--resource Subject:* \
--kafka-cluster <kafka-cluster-id> \
--schema-registry-cluster <schema-registry-cluster-id>
Remove SystemAdmin from user sr-admin on a Schema Registry cluster:
confluent iam rbac role-binding delete \
--principal User:sr-admin \
--role SystemAdmin \
--kafka-cluster <kafka-cluster-id> \
--schema-registry-cluster <schema-registry-cluster-id>
Manage ACLs
ACLs provide fine-grained, resource-level permissions for Kafka clusters. While RBAC assigns predefined roles to users, ACLs let you control exactly which operations a specific user or group can perform on individual resources such as topics and consumer groups.
An ACL rule consists of five components:
| Component | Description | Example |
|---|---|---|
| Principal | The user or group | User:Bob, User:* (all users) |
| Host | IP address the principal connects from | 198.51.xx.xx, * (any host) |
| Resource | The Kafka resource | --topic test-topic, --consumer-group my-group |
| Operation | The permitted action | READ, WRITE, CREATE, DELETE |
| Permission | Allow or deny | --allow, --deny |
Each cluster supports up to 1,000 ACLs.
ACLs created for IAM roles in ApsaraMQ for Confluent support rules on IPv6 addresses but not on CIDR blocks or subnets.
By default, access from addresses not covered by an ACL is denied.
The --deny flag takes precedence over the --allow flag.
Use the wildcard character * with --principal to apply a rule to all users.
Use the --prefix flag to match resources by name prefix. For example, --topic abc- --prefix applies the rule to all topics whose names start with abc-.
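As a sketch, a prefix rule looks like the following. User:app-writer is a hypothetical principal used for illustration; replace the cluster ID placeholder with your own:

```shell
# Grant WRITE on every topic whose name starts with "abc-".
confluent iam acl create \
  --allow \
  --principal User:app-writer \
  --operation WRITE \
  --topic abc- \
  --prefix \
  --kafka-cluster <kafka-cluster-id>
```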
For the full command reference, see confluent iam acl.
Create an ACL
confluent iam acl create [flags]
Required flags
| Flag | Description |
|---|---|
| --kafka-cluster | Kafka cluster ID |
| --principal | Principal in User:<name> or Group:<name> format |
| --operation | Operation: all, alter, alter-configs, cluster-action, create, delete, describe, describe-configs, idempotent-write, read, write |
Optional flags
| Flag | Description |
|---|---|
| --allow | Allow access |
| --deny | Deny access |
| --host | IP address to restrict access to (default: *) |
| --topic | Topic resource. Combined with --prefix, applies to all topics matching the prefix. |
| --consumer-group | Consumer group resource |
| --transactional-id | Transactional ID resource |
| --cluster-scope | Apply the ACL to the Kafka cluster itself |
| --prefix | Treat the resource name as a prefix pattern |
| --client-cert-path | Path to client certificate for mTLS authentication |
| --client-key-path | Path to client private key for mTLS authentication |
| --context | CLI context name |
Examples
Allow user Bob at IP address 198.51.xx.xx to read from test-topic:
confluent iam acl create \
--allow \
--principal User:Bob \
--operation READ \
--host 198.51.xx.xx \
--topic test-topic \
--kafka-cluster <kafka-cluster-id>
Allow all users to read from test-topic, but deny user BadBob:
# Allow all users
confluent iam acl create \
--allow \
--principal User:'*' \
--operation READ \
--topic test-topic \
--kafka-cluster <kafka-cluster-id>
# Deny BadBob (--deny takes precedence over --allow)
confluent iam acl create \
--deny \
--principal User:BadBob \
--operation READ \
--topic test-topic \
--kafka-cluster <kafka-cluster-id>
List ACLs
confluent iam acl list [flags]
The list command accepts the same filtering flags as create (such as --principal, --operation, --topic), plus:
| Flag | Description |
|---|---|
| -o, --output | Output format: human, json, or yaml (default: human) |
Example
List all ACLs for a Kafka cluster:
confluent iam acl list --kafka-cluster <kafka-cluster-id>
Delete an ACL
confluent iam acl delete [flags]
Required flags
| Flag | Description |
|---|---|
| --kafka-cluster | Kafka cluster ID |
| --principal | Principal in User:<name> or Group:<name> format |
| --operation | Operation: all, alter, alter-configs, cluster-action, create, delete, describe, describe-configs, idempotent-write, read, write |
| --host | IP address (default: *) |
The delete command also accepts the same optional flags as create (such as --topic, --consumer-group, --prefix), plus:
| Flag | Description |
|---|---|
| --force | Skip the deletion confirmation prompt |
Example
Delete the ACL that allows user Bob to read from test-topic:
confluent iam acl delete \
--allow \
--principal User:Bob \
--operation READ \
--host 198.51.xx.xx \
--topic test-topic \
--kafka-cluster <kafka-cluster-id>
References
For the complete Confluent CLI command reference, see Confluent CLI Command Reference.