This topic describes how to configure an LDAP user for authentication in E-MapReduce (EMR) Kafka and provides examples.
Prerequisites
A Dataflow cluster is created, and the Kafka and OpenLDAP services are selected for the cluster. For more information about how to create a cluster, see Create a Dataflow Kafka cluster.
Limits
This topic applies to Kafka clusters of a minor version that is later than EMR V3.44.0 or EMR V5.10.0.
You must make sure that the EMR OpenLDAP service or an external LDAP service is deployed in your cluster.
In this topic, the EMR OpenLDAP service is used.
Precautions
If you want to configure a user group for authentication, you must make sure that the memberOf overlay feature is enabled for the LDAP service or that the memberOf property can be configured for an LDAP user.
Configure an LDAP user for authentication
Create a superuser.
Note: If you have already created a superuser, skip this step.
A Kafka superuser can access all Kafka resources. In this example, a Kafka superuser is added to the EMR OpenLDAP service and used to access the broker nodes and components of Kafka.
Connect to the master-1-1 node of the cluster in SSH mode. For more information, see Log on to a cluster.
Create a file named kafka.ldif and add the following information to the file. In this example, the ldapadd command is used to add an LDAP user whose UID is kafka and whose password is kafka-secret.

dn: uid=kafka,ou=people,o=emr
cn: kafka
sn: kafka
objectClass: inetOrgPerson
userPassword: kafka-secret
uid: kafka

Run the following command to add the LDAP user:
ldapadd -H ldap://master-1-1:10389 -f kafka.ldif -D ${uid} -w ${rootDnPW}

Note:
${uid}: Replace ${uid} with the value of the admin_dn parameter. You can obtain the value of the admin_dn parameter on the Configure tab of the OpenLDAP service page in the EMR console.
${rootDnPW}: Replace ${rootDnPW} with the value of the admin_pwd parameter. You can obtain the value of the admin_pwd parameter on the Configure tab of the OpenLDAP service page in the EMR console.
10389: the listening port of the OpenLDAP service.
After the LDAP user is added, you can run the following command to view information about the user:
ldapsearch -w ${rootDnPW} -D ${uid} -H ldap://master-1-1:10389 -b uid=kafka,ou=people,o=emr
Go to the Configure tab of the Kafka service page.
In the top navigation bar, select the region where your cluster resides and select a resource group based on your business requirements.
On the EMR on ECS page, find the desired cluster and click Services in the Actions column.
On the Services tab, find the Kafka service and click Configure.
Modify configuration items and save the configurations.
On the Configure tab of the Kafka service page, click the server.properties tab.
Change the value of the kafka.ssl.config.type configuration item to CUSTOM.
Change the value of the authorizer.class.name configuration item to kafka.security.ldap.authorizer.SimpleLdapAuthorizer.
Save the configurations.
On the Configure tab of the Kafka service page, click Save.
In the dialog box that appears, configure the Execution Reason parameter and turn on Automatically Update Configurations.
Add configuration items and save the configurations.
On the server.properties tab, click Add Configuration Item.
In the Add Configuration Item dialog box, add the configuration items that are described in the following table.
Configuration item
Value
Description
super.users
User:kafka
kafka is the name of the superuser that you created. Replace the username based on your business requirements. For more information about how to create a user, see Step 1.
listener.name.${listener}.sasl.enabled.mechanisms
PLAIN
Replace ${listener} with the name of a listener. Example: sasl_ssl.
listener.name.${listener}.plain.sasl.jaas.config
org.apache.kafka.common.security.plain.PlainLoginModule required
username="kafka"
password="kafka-secret"
emr.kafka.security.ldap.user.name.attribute="uid"
emr.kafka.security.ldap.user.base.dn="ou=people,o=emr"
emr.kafka.security.ldap.group.name.attribute="cn"
emr.kafka.security.ldap.group.base.dn="ou=groups,o=emr"
emr.kafka.security.ldap.admin.name.attribute="uid"
emr.kafka.security.ldap.admin.base.dn="o=emr"
emr.kafka.security.ldap.url="ldaps://master-1-1:10636"
emr.kafka.security.ldap.bind.user="admin"
emr.kafka.security.ldap.bind.user.password="WMMuhh3P**********"
emr.kafka.security.ldap.user.member.of.attribute="memberOf"
emr.kafka.security.ldap.group.authorization.support="true"
;

Replace ${listener} with the name of a listener. Example: sasl_ssl.
Replace the values of the following options in this configuration item based on the actual configurations of the LDAP user:
Replace the values of the following options in this configuration item based on the actual configurations of the LDAP user:
username: the name of the superuser that you created. The Kafka service uses the superuser to access nodes.
emr.kafka.security.ldap.user.name.attribute: the name property of the LDAP user. You can use the property to obtain the username.
emr.kafka.security.ldap.user.base.dn: the base distinguished name (DN) of the LDAP user.
emr.kafka.security.ldap.group.name.attribute: the name property of the LDAP user group. You can use the property to obtain the name of a user group.
emr.kafka.security.ldap.group.base.dn: the base DN of the LDAP user group.
emr.kafka.security.ldap.admin.name.attribute: the name property of the LDAP admin user.
emr.kafka.security.ldap.admin.base.dn: the base DN of the LDAP admin user.
emr.kafka.security.ldap.url: the URL of the OpenLDAP service.
emr.kafka.security.ldap.bind.user: the username of the LDAP administrator that is used for user group authorization.
emr.kafka.security.ldap.bind.user.password: the password of the LDAP administrator.
emr.kafka.security.ldap.user.member.of.attribute: the property that records the groups to which the user belongs. You can use the property to obtain those groups.
emr.kafka.security.ldap.group.authorization.support: specifies whether to support group authorization. If you set this option to true and a group to which the user belongs has permissions, the user inherits the permissions of the group.
listener.name.${listener}.plain.sasl.server.callback.handler.class
kafka.security.ldap.authenticator.LdapAuthenticateCallbackHandler
You must replace ${listener} with the name of a listener. Example: sasl_ssl.
sasl.mechanism.inter.broker.protocol
PLAIN
None.
sasl.enabled.mechanisms
PLAIN
None.
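The JAAS value in the table above is a single long string of key="value" pairs, which is easy to get wrong by hand. The following Python sketch (a hypothetical helper, not part of EMR or Kafka) shows how the username, password, and emr.kafka.security.ldap.* options compose into the final value:

```python
# Sketch: assemble the listener.name.${listener}.plain.sasl.jaas.config value
# from its parts. The option names mirror the table above; the helper itself
# is hypothetical and only illustrates how the string is composed.

def build_jaas_config(username, password, ldap_options):
    parts = [
        "org.apache.kafka.common.security.plain.PlainLoginModule required",
        f'username="{username}"',
        f'password="{password}"',
    ]
    # Each emr.kafka.security.ldap.* option becomes a key="value" pair.
    parts += [f'{key}="{value}"' for key, value in ldap_options.items()]
    # The value must end with a semicolon.
    return " ".join(parts) + " ;"

jaas = build_jaas_config(
    "kafka",
    "kafka-secret",
    {
        "emr.kafka.security.ldap.user.name.attribute": "uid",
        "emr.kafka.security.ldap.user.base.dn": "ou=people,o=emr",
        "emr.kafka.security.ldap.url": "ldaps://master-1-1:10636",
        "emr.kafka.security.ldap.group.authorization.support": "true",
    },
)
print(jaas)
```

Only a subset of the options is shown here; the full set is listed in the table above.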
Modify other configuration items.
Modify the following configuration items based on your business requirements.
Note: You must replace the username and password in the value of the kafka.client.jaas.content parameter based on your business requirements. We recommend that you use a superuser.
Configuration file
Parameter
Value
Description
kafka_client_jaas.conf
kafka.client.jaas.content
KafkaClient {
  org.apache.kafka.common.security.plain.PlainLoginModule required
  username="kafka"
  password="kafka-secret";
};

The value of this parameter must end with a semicolon (;).
schema-registry.properties
schema_registry_opts
-Djava.security.auth.login.config=/etc/taihao-apps/kafka-conf/kafka-conf/kafka_client_jaas.conf

If this parameter already has a value, add the required information to the end of the existing value.
kafka-rest.properties
kafkarest_opts
-Djava.security.auth.login.config=/etc/taihao-apps/kafka-conf/kafka-conf/kafka_client_jaas.conf

If this parameter already has a value, add the required information to the end of the existing value.
Restart the Kafka service.
On the Configure tab of the Kafka service page, choose .
In the dialog box that appears, configure the Execution Reason parameter and click OK.
In the Confirm message, click OK.
Example
The operations in this example apply to open source Kafka 2.4.1. For more information about how to configure an LDAP user for authentication in other Kafka versions, see Apache Kafka.
Create test users and test user groups.
If the memberOf overlay feature is disabled for the OpenLDAP service, perform the following operations to enable the feature.
Note: The method that is used to enable the memberOf overlay feature varies based on the LDAP service that you use. The following operations are only for reference.
You must enable the memberOf overlay feature for all nodes on which the OpenLDAP service is deployed.
You must modify the following parameters in the required configuration files based on your business requirements. Examples:
olcModulePath: For a 32-bit operating system, set this parameter to /usr/lib/openldap.
dn: cn=module{0},cn=config: If the cn=module{0}.ldif file already exists in the /etc/openldap/slapd.d/cn=config directory, change 0 in cn=module{0} to 1. If the cn=module{1}.ldif file already exists, change 1 to 2. All digits in the file names follow the same rule.
dn: olcOverlay={0}memberof,olcDatabase={2}hdb,cn=config: Configure the olcDatabase parameter based on the database configuration file in the /etc/openldap/slapd.d/cn=config directory. For example, if the olcDatabase={2}hdb.ldif file exists in that directory, set the olcDatabase parameter to {2}hdb.
Run the vim memberof_config.ldif command to open the memberof_config.ldif file and add the following information to the file:

dn: cn=module{0},cn=config
cn: module{0}
objectClass: olcModuleList
objectclass: top
olcModuleload: memberof.la
olcModulePath: /usr/lib64/openldap

dn: olcOverlay={0}memberof,olcDatabase={2}hdb,cn=config
objectClass: olcConfig
objectClass: olcMemberOf
objectClass: olcOverlayConfig
objectClass: top
olcOverlay: memberof
olcMemberOfDangling: ignore
olcMemberOfRefInt: TRUE
olcMemberOfGroupOC: groupOfNames
olcMemberOfMemberAD: member
olcMemberOfMemberOfAD: memberOf

Run the vim refint1.ldif command to open the refint1.ldif file and add the following information to the file:

dn: cn=module{0},cn=config
add: olcmoduleload
olcmoduleload: refint

Run the vim refint2.ldif command to open the refint2.ldif file and add the following information to the file:

dn: olcOverlay=refint,olcDatabase={2}hdb,cn=config
objectClass: olcConfig
objectClass: olcOverlayConfig
objectClass: olcRefintConfig
objectClass: top
olcOverlay: refint
olcRefintAttribute: memberof member manager owner

Run the following commands to load the configuration files:

ldapadd -Q -Y EXTERNAL -H ldapi:/// -f memberof_config.ldif
ldapmodify -Q -Y EXTERNAL -H ldapi:/// -f refint1.ldif
ldapadd -Q -Y EXTERNAL -H ldapi:/// -f refint2.ldif
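The cn=module{N} renumbering rule described in the note above amounts to picking the next unused index. A small sketch of that rule (the helper is illustrative only, not an OpenLDAP tool):

```python
# Pick the next cn=module{N} index given the file names already present in
# /etc/openldap/slapd.d/cn=config, following the rule described above.
import re

def next_module_index(existing_files):
    indexes = [int(m.group(1)) for f in existing_files
               if (m := re.fullmatch(r"cn=module\{(\d+)\}\.ldif", f))]
    return max(indexes) + 1 if indexes else 0

print(next_module_index(["cn=module{0}.ldif"]))  # existing {0} -> use 1
```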
Create test users.
Run the ldapadd command to create test users. Specify kafka-users.ldif as the file name and add the following information to the file:
dn: uid=kafka-user1,ou=people,o=emr
cn: kafka-user1
sn: kafka-user1
uid: kafka-user1
objectClass: inetOrgPerson
userPassword: kafka-secret

dn: uid=kafka-user2,ou=people,o=emr
cn: kafka-user2
sn: kafka-user2
uid: kafka-user2
objectClass: inetOrgPerson
userPassword: kafka-secret

Create test user groups.
Run the ldapadd command to create test user groups. Specify kafka-groups.ldif as the file name and add the following information to the file:
dn: cn=kafka-group1,ou=groups,o=emr
cn: kafka-group1
objectClass: groupOfNames
member: uid=kafka-user1,ou=people,o=emr
member: uid=kafka-user2,ou=people,o=emr

dn: cn=kafka-group2,ou=groups,o=emr
cn: kafka-group2
objectClass: groupOfNames
member: uid=kafka-user1,ou=people,o=emr

Run the following command on the OpenLDAP server to view the memberOf property of the test users:

ldapsearch -Q -Y EXTERNAL -H ldapi:/// -b ou=people,o=emr memberOf

In the returned information, the memberOf property of kafka-user1 contains kafka-group1 and kafka-group2, and the memberOf property of kafka-user2 contains kafka-group1.
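The expected memberOf values follow directly from kafka-groups.ldif: the memberOf overlay inverts each group's member list into a per-user list of groups. A minimal sketch of that inversion (illustrative only; OpenLDAP maintains the attribute itself):

```python
# Invert group membership (group -> members) into memberOf (user -> groups),
# mirroring what the memberOf overlay maintains for the entries above.
# The DNs match the kafka-groups.ldif example; the code is only a sketch.

groups = {
    "cn=kafka-group1,ou=groups,o=emr": [
        "uid=kafka-user1,ou=people,o=emr",
        "uid=kafka-user2,ou=people,o=emr",
    ],
    "cn=kafka-group2,ou=groups,o=emr": [
        "uid=kafka-user1,ou=people,o=emr",
    ],
}

member_of = {}
for group_dn, members in groups.items():
    for member_dn in members:
        member_of.setdefault(member_dn, []).append(group_dn)

print(member_of["uid=kafka-user1,ou=people,o=emr"])
```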
Create client configuration files.
Create a configuration file named client.properties and add the following information to the file:
security.protocol=SASL_PLAINTEXT
sasl.mechanism=PLAIN
# Replace the username and password based on different clients.
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="kafka" password="kafka-secret";

Create a configuration file named kafka-user1.properties and add the following information to the file:

security.protocol=SASL_PLAINTEXT
sasl.mechanism=PLAIN
# Replace the username and password based on different clients.
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="kafka-user1" password="kafka-secret";

Create a configuration file named kafka-user2.properties and add the following information to the file:

security.protocol=SASL_PLAINTEXT
sasl.mechanism=PLAIN
# Replace the username and password based on different clients.
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="kafka-user2" password="kafka-secret";
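The client files differ only in the credentials, so you can generate them from a template. A short sketch, assuming the same SASL_PLAINTEXT settings as in the examples above (the generator itself is hypothetical; only the file contents match the examples):

```python
# Generate per-user Kafka client .properties files from one template.
# File contents mirror the examples above; this script is only a sketch.
import os
import tempfile

TEMPLATE = (
    "security.protocol=SASL_PLAINTEXT\n"
    "sasl.mechanism=PLAIN\n"
    "sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule "
    'required username="{user}" password="{password}";\n'
)

# file name -> (username, password); replace with your own credentials.
clients = {
    "client": ("kafka", "kafka-secret"),
    "kafka-user1": ("kafka-user1", "kafka-secret"),
    "kafka-user2": ("kafka-user2", "kafka-secret"),
}

out_dir = tempfile.mkdtemp()  # write to a scratch directory for this sketch
for name, (user, password) in clients.items():
    with open(os.path.join(out_dir, f"{name}.properties"), "w") as f:
        f.write(TEMPLATE.format(user=user, password=password))

content = open(os.path.join(out_dir, "kafka-user1.properties")).read()
print(content)
```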
Create a test topic named test.
kafka-topics.sh --bootstrap-server core-1-1:9092 --command-config client.properties --create --topic test --replication-factor 3 --partitions 2

Grant the required permissions to the test users and test user groups.
For more information about the commands that are used to grant permissions, see Authorization and ACLs.
# Grant permissions to the kafka-group2 user group.
kafka-acls.sh --authorizer-properties zookeeper.connect=$KAFKA_ZOOKEEPER --add --allow-principal User:kafka-group2 --allow-host "*" --operation All --cluster kafka-cluster
kafka-acls.sh --authorizer-properties zookeeper.connect=$KAFKA_ZOOKEEPER --add --allow-principal User:kafka-group2 --allow-host "*" --operation All --topic test
# Deny permissions for the kafka-group1 user group.
kafka-acls.sh --authorizer-properties zookeeper.connect=$KAFKA_ZOOKEEPER --add --deny-principal User:kafka-group1 --deny-host "*" --operation All --topic test
# Deny the Read permission for the kafka-user1 user.
kafka-acls.sh --authorizer-properties zookeeper.connect=$KAFKA_ZOOKEEPER --add --deny-principal User:kafka-user1 --deny-host "*" --operation Read --topic test
# Grant the Read permission to the kafka-user2 user.
kafka-acls.sh --authorizer-properties zookeeper.connect=$KAFKA_ZOOKEEPER --add --allow-principal User:kafka-user2 --allow-host "*" --operation Read --topic test

You can run the following command to view the permissions of the test users and test user groups:
kafka-acls.sh --authorizer-properties zookeeper.connect=$KAFKA_ZOOKEEPER --list --topic test

The following information is returned:

Current ACLs for resource `Topic:LITERAL:test`:
 User:kafka-group2 has Allow permission for operations: All from hosts: *
 User:kafka-user2 has Allow permission for operations: Read from hosts: *
 User:kafka-user1 has Deny permission for operations: Read from hosts: *
 User:kafka-group1 has Deny permission for operations: All from hosts: *

Verify permissions.
The kafka-user1 user can write data to the test topic because the user belongs to the kafka-group2 user group.
kafka-console-producer.sh --broker-list core-1-1:9092 --producer.config ./kafka-user1.properties --topic test

Write data to the test topic:

>a
>b
>c
>d

The kafka-user1 user cannot read data from the test topic.
# Grant the required permissions to the consumer group.
kafka-acls.sh --authorizer-properties zookeeper.connect=$KAFKA_ZOOKEEPER --add --allow-principal User:kafka-user1 --allow-host "*" --operation All --group kafka-user1-consumer
kafka-console-consumer.sh --bootstrap-server core-1-1:9092 --consumer.config ./kafka-user1.properties --topic test --group kafka-user1-consumer

The result indicates that the authorization fails, and the user cannot read data from the test topic.
The kafka-user2 user cannot write data to the test topic.
kafka-console-producer.sh --broker-list core-1-1:9092 --producer.config ./kafka-user2.properties --topic test
>a
# The authorization fails.

The result indicates that the authorization fails.
The kafka-user2 user can read data from the test topic.
# Grant the required permissions to the consumer group.
kafka-acls.sh --authorizer-properties zookeeper.connect=$KAFKA_ZOOKEEPER --add --allow-principal User:kafka-user2 --allow-host "*" --operation All --group kafka-user2-consumer
kafka-console-consumer.sh --bootstrap-server core-1-1:9092 --consumer.config ./kafka-user2.properties --topic test --group kafka-user2-consumer --from-beginning
# Data is consumed as expected.
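The results above are consistent with an evaluation order in which user-level ACL entries are checked before group-level entries. The following toy evaluator (illustrative only; it is not the actual logic of kafka.security.ldap.authorizer.SimpleLdapAuthorizer) reproduces the expected outcomes for the ACLs and group memberships used in this example:

```python
# Toy ACL evaluator for the example above. The ACL entries mirror the
# kafka-acls.sh output; the precedence model (user entries before group
# entries, any group Allow grants access) is an assumption for illustration,
# not the documented behavior of SimpleLdapAuthorizer.

ACLS = [
    ("User:kafka-group2", "Allow", "All"),
    ("User:kafka-group1", "Deny", "All"),
    ("User:kafka-user1", "Deny", "Read"),
    ("User:kafka-user2", "Allow", "Read"),
]

def allowed(user, groups, operation):
    def has(principal, effect):
        # An entry matches when it names the principal and covers the operation.
        return any(p == principal and e == effect and o in (operation, "All")
                   for p, e, o in ACLS)
    # User-level entries take precedence over group-level entries.
    if has(f"User:{user}", "Deny"):
        return False
    if has(f"User:{user}", "Allow"):
        return True
    # Fall back to group-level entries: any Allow grants access.
    if any(has(f"User:{g}", "Allow") for g in groups):
        return True
    return False

# Memberships from kafka-groups.ldif: user1 is in both groups, user2 in group1.
print(allowed("kafka-user1", ["kafka-group1", "kafka-group2"], "Write"))
print(allowed("kafka-user1", ["kafka-group1", "kafka-group2"], "Read"))
print(allowed("kafka-user2", ["kafka-group1"], "Write"))
print(allowed("kafka-user2", ["kafka-group1"], "Read"))
```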