ApsaraMQ for Kafka: ksqlDB

Last Updated: Mar 11, 2026

ksqlDB is a streaming SQL engine for Apache Kafka. It replaces custom stream processing code with SQL statements that you run against Kafka topics through a built-in GUI editor.

With ksqlDB, you can:

  • Query streaming data in real time -- run continuous SQL queries against live Kafka topics

  • Create materialized views -- maintain always-up-to-date query results backed by Kafka

  • Process streams with SQL -- perform aggregation, joins, windowed operations, and session-based operations without writing application code

ksqlDB consolidates the stream processing engine and connectors into a single system. For more about the underlying architecture, see ksqlDB for Confluent Platform.
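
As an illustration of the "process streams with SQL" capability above, the following statement sketches a windowed aggregation maintained as a materialized view. It is illustrative only: the payments_stream stream and payments_per_hour table are hypothetical names and are not created elsewhere in this guide.

-- Hypothetical materialized view: per-key payment counts and totals over 1-hour tumbling windows.
-- payments_stream is assumed to already exist as a ksqlDB stream with id and amount columns.
CREATE TABLE payments_per_hour AS
  SELECT id,
         COUNT(*) AS payment_count,
         SUM(amount) AS total_amount
  FROM payments_stream
  WINDOW TUMBLING (SIZE 1 HOUR)
  GROUP BY id
  EMIT CHANGES;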

Architecture

Traditional stream processing requires separate systems for connectors, a processing engine, and a storage layer. ksqlDB merges all three into a single platform.

  • Traditional stream processing architecture

  • ksqlDB-based stream processing architecture
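
Because connector management is part of the same system, connectors can be defined with the same SQL interface used for queries. The following is a hypothetical sketch only: it assumes a Kafka Connect worker with the JDBC source connector plugin is available to the ksqlDB cluster, and every connection value is a placeholder rather than part of this guide.

-- Hypothetical connector managed from ksqlDB (all values are placeholders).
CREATE SOURCE CONNECTOR payments_source WITH (
  'connector.class'          = 'io.confluent.connect.jdbc.JdbcSourceConnector',
  'connection.url'           = 'jdbc:postgresql://db.example.com:5432/payments',
  'mode'                     = 'incrementing',
  'incrementing.column.name' = 'id',
  'topic.prefix'             = 'jdbc_'
);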

Prerequisites

Before you begin, make sure that you have:

  • An ApsaraMQ for Confluent instance

  • A topic created for ksqlDB (this guide uses a topic named ksql_test)

  • A schema configured with Avro validation for the topic

  • Schema validation enabled on the topic

  • Appropriate RBAC permissions for the ksqlDB cluster

The following sections walk you through each prerequisite.

Create and configure a topic

  1. Create a topic. This guide uses a topic named ksql_test.

  2. Create a schema. Select Avro as the validation mode and add the following schema definition:

    {
        "namespace": "io.confluent.examples.clients.basicavro",
        "type": "record",
        "name": "Payment",
        "fields": [
            {
                "name": "id",
                "type": "string"
            },
            {
                "name": "amount",
                "type": "double"
            }
        ]
    }

    This schema defines a Payment record with two fields: id (string) and amount (double). All messages produced to ksql_test must conform to this schema.

  3. Enable schema validation for the ksql_test topic.

Set up authorization

ApsaraMQ for Confluent uses role-based access control (RBAC) to manage ksqlDB cluster access. This guide uses a user named test.

  1. Create the test user and assign the following roles. For more information, see Manage users and grant permissions to them.

    Username | Cluster type    | Resource type | Role
    test     | Kafka cluster   | Cluster       | SystemAdmin
    test     | KSQL            | Cluster       | ResourceOwner
    test     | Schema Registry | Cluster       | SystemAdmin
  2. Grant read permissions on the ksql_test topic to ksql, the default ksqlDB user:

    Username | Cluster type  | Resource type | Role
    ksql     | Kafka cluster | Topic         | DeveloperRead

Create a stream and query data

With the topic, schema, and permissions in place, open the ksqlDB editor in Control Center to create a stream and run queries.

Open the ksqlDB editor

  1. Log on to the ApsaraMQ for Confluent console. In the left-side navigation pane, click Instances.

  2. In the top navigation bar, select the region where your instance resides. On the Instances page, click the instance name.

  3. In the upper-right corner of the Instance Details page, click Log on to Control Center.

  4. On the Home page of Control Center, click the controlcenter.cluster card to go to the Cluster overview page.

    Control Center home page

  5. In the left-side navigation pane, click ksqlDB, then click the name of the ksqlDB cluster.

  6. On the cluster details page, click the Editor tab to open the SQL editor.
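
Before creating the stream, you can optionally confirm that the editor is connected to the cluster by listing the resources ksqlDB can see. Both statements are standard ksqlDB metadata commands:

SHOW TOPICS;
SHOW STREAMS;

If the permissions from the previous section are in place, the ksql_test topic appears in the SHOW TOPICS output.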

Create a stream

Run the following statement in the editor to create a stream backed by the ksql_test topic:

CREATE STREAM ksql_test_stream WITH (KAFKA_TOPIC='ksql_test', VALUE_FORMAT='AVRO');

Each parameter serves a specific purpose:

Parameter    | Description
KAFKA_TOPIC  | The Kafka topic that backs the stream. The stream reads events from this topic. In this example, ksql_test is the topic created earlier.
VALUE_FORMAT | Serialization format for message values. AVRO matches the Avro schema you configured on the topic. Other supported formats include JSON and PROTOBUF.

For more stream creation options, see Quick Start.
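
If a topic does not have a registered schema, you can instead declare the columns explicitly in the CREATE STREAM statement. The following sketch is hypothetical and not part of this guide: the stream and topic names are placeholders, and PARTITIONS lets ksqlDB create the backing topic if it does not already exist.

-- Hypothetical stream with explicitly declared columns over a JSON topic (placeholder names).
CREATE STREAM payments_json (id VARCHAR, amount DOUBLE)
  WITH (KAFKA_TOPIC='payments_json', VALUE_FORMAT='JSON', PARTITIONS=1);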

Query the stream

Run a push query to continuously receive new events as they arrive:

SELECT * FROM ksql_test_stream EMIT CHANGES;

This is a push query. It runs continuously and pushes each new event to the client as it arrives on the underlying topic. The query does not terminate until you stop it.
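
If you want a push query to stop on its own, you can append a LIMIT clause; the query terminates after emitting that many rows. For example:

SELECT * FROM ksql_test_stream EMIT CHANGES LIMIT 5;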

ksqlDB also supports pull queries, which follow a traditional request-response model: they return a point-in-time result from a materialized view and then terminate, similar to a standard database SELECT.
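
Pull queries run against tables rather than directly against streams, so they need a materialized view to query. The following sketch is illustrative only: the payment_totals table and the key value are hypothetical and are not created elsewhere in this guide.

-- Hypothetical materialized view keyed by id.
CREATE TABLE payment_totals AS
  SELECT id, SUM(amount) AS total_amount
  FROM ksql_test_stream
  GROUP BY id
  EMIT CHANGES;

-- Pull query: returns the current total for one key, then terminates ('example-id' is a placeholder).
SELECT * FROM payment_totals WHERE id = 'example-id';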

Verify with a test message

To confirm that the stream processes events correctly, produce a test message and observe the push query output.

  1. Start the push query. In the ksqlDB Editor tab, enter the following statement and click Run query:

    SELECT * FROM ksql_test_stream EMIT CHANGES;

    ksqlDB editor with push query

  2. Produce a test message.

    1. Open a new Control Center window.

    2. Navigate to the ksql_test topic details page and click the Messages tab.

    3. Click Produce a new message. In the Produce a new message panel, enter the following message body and click Produce:

      {
          "id": "Tome",
          "amount": 18
      }

    Produce a new message panel

  3. Confirm the result. Switch back to the editor window where the push query is running. The test message appears in the query output.

    Stream query result showing the test message
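
If other messages are also flowing into the topic, you can narrow the push query to the test message by filtering on the id value used above:

SELECT * FROM ksql_test_stream WHERE id = 'Tome' EMIT CHANGES;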
