This topic describes the system architecture, benefits, scenarios, and limits of the MaxCompute Query Acceleration (MCQA) feature.

Benefits

The MCQA feature of MaxCompute provides the following benefits:
  • Accelerates the execution of small- and medium-sized query jobs and reduces the time that is required to complete such a query job from minutes to seconds. The MCQA feature is compatible with other query features of MaxCompute.
  • Supports mainstream business intelligence (BI) tools to perform ad hoc queries or BI analysis.
  • Uses an independent resource pool that does not occupy offline computing resources. MCQA can automatically identify query jobs to reduce the load of the job queue and improve user experience.
  • Allows you to write the query results of MCQA jobs to a temporary cache. If you run the same query job later, MaxCompute preferentially returns the query results in the cache to accelerate the execution of the job.
Note If an MCQA job is rolled back to an SQL query job, you are charged based on the billing method of the SQL query job.

Scenarios

The following describes the scenarios for which MCQA is suitable.

Scenario: Ad hoc query
Description: You can use MCQA to optimize the query performance of small- and medium-sized datasets (less than 100 GB) and perform low-latency queries on MaxCompute tables. This accelerates data development and data analysis. You can specify query criteria based on your business requirements, obtain query results, and adjust the query logic.
Applicable scope: In this scenario, the query latency must be within dozens of seconds. Users are data developers or data analysts who are skilled in SQL and prefer to use the client tools that they are familiar with to analyze queries.

Scenario: BI analysis
Description: If you use MaxCompute to build an enterprise-class data warehouse, MaxCompute performs extract, transform, load (ETL) operations to process data into business-oriented and consumable aggregate data. MCQA features low latency and supports elastic parallelism and data caching. You can use MCQA with the partitions and buckets in MaxCompute tables to run parallel jobs, generate reports, analyze statistics, and analyze fixed reports at a low cost. In most cases, the query object is aggregate data.
Applicable scope: This scenario is suitable for multidimensional queries, fixed queries, or high-frequency queries that contain small amounts of data. Queries are latency-sensitive, and the results are returned in seconds. For example, the latency for most queries is less than 5 seconds. The time that is required to complete each query varies based on the data size and query complexity.

Scenario: Detailed queries and analysis of large amounts of data
Description: MCQA automatically identifies the size of query jobs. MCQA responds to and processes small-sized jobs at the earliest opportunity, and allocates the resources that are required for large-sized jobs. This helps analysts run query jobs of different sizes and complexities.
Applicable scope: In this scenario, large amounts of historical data are queried. The size of the valid data that is queried is small, and the requirement for latency is low. Users are business analysts who want to gain business insights from data, explore potential business opportunities, and validate business assumptions.

Limits

MCQA supports only query statements that start with SELECT. If you submit statements that MCQA does not support, you can configure the MaxCompute client, the MaxCompute Java Database Connectivity (JDBC) driver, or MaxCompute SDK to roll back MCQA jobs to SQL query jobs. Other tools do not support the rollback from an MCQA job to an SQL query job. To roll back MCQA jobs to SQL query jobs by using the MaxCompute client, the MaxCompute JDBC driver, or MaxCompute SDK, make sure that the following version requirements are met:
  • The version of the MaxCompute client must be V0.40.8 or later. You can click odpscmd to download the required version of the MaxCompute client.
  • The version of the MaxCompute JDBC driver must be V3.3.0 or later. You can click ODPS-JDBC to download the required version of the JDBC driver.
  • The version of MaxCompute SDK must be V0.40.8-public or later. You can click ODPS-SDK to download the required version of MaxCompute SDK.
The following table describes the limits of MCQA.
Item Description
Feature
  • The MCQA feature is available in MaxCompute Standard Edition that uses the pay-as-you-go billing method.
  • The MCQA feature supports the subscription billing method.
  • The MCQA feature is unavailable in MaxCompute Developer Edition. You must upgrade MaxCompute to Standard Edition.
Query
  • A maximum of 2,000 workers can run in parallel in a job.
  • If you submit an MCQA job from the MaxCompute client, the default timeout period is 30 seconds. If you submit an MCQA job from the ad hoc query module of DataWorks, the default timeout period is 20 seconds. If the MCQA job times out, the system automatically rolls back the MCQA job to an SQL query job.
  • MCQA can cache only the data that is stored in ALIORC tables into memory to accelerate queries.
  • You cannot use MCQA to query data from external tables.
Query parallelism
  • Requirements for the subscription billing method
    • Free edition: No quota groups of the interactive resource type are configured for MCQA jobs.
      A maximum of five MCQA jobs can run in parallel in a project, and a maximum of 500 MCQA jobs can run in a project every day. By default, additional MCQA jobs are rolled back to SQL query jobs. If you forbid the rollback of MCQA jobs, the following error is returned:
      ODPS-1800001: Session exception - Failed to submit sub-query in session because:Prepaid project run out of free query quota.
    • Quota groups of the interactive resource type are configured for MCQA jobs.
      • One tenant can create only one quota group of the interactive resource type for MCQA jobs.
      • A maximum of 120 MCQA jobs can run in parallel in each project. If the number of parallel MCQA jobs reaches the upper limit, additional MCQA jobs are rolled back to SQL query jobs.
      • When you configure a quota group of the interactive resource type, make sure that the minimum quota of reserved compute units (CUs) that are used for MCQA jobs is the same as the maximum quota of reserved CUs that are used for MCQA jobs. Otherwise, the quota group does not take effect.
      • The quota group of the interactive resource type must meet the following requirements. If the quota group of the interactive resource type does not meet the following requirements, you cannot submit MCQA jobs to this quota group.
        • The minimum number of reserved CUs [minCU] is the same as the maximum number of reserved CUs [maxCU].
        • The number of reserved CUs is greater than or equal to 50.
      • After you configure a quota group of the interactive resource type, you can submit only jobs that are identified as MCQA jobs to this quota group for all projects, and MCQA can no longer be used free of charge.
      • Quota groups of the interactive resource type cannot be used as the default quota group for projects. When you use MCQA, interactive quota groups automatically take effect for all projects. You do not need to associate the quota groups with projects.
  • Requirements for the pay-as-you-go billing method

    A maximum of 120 MCQA jobs can run in parallel in a MaxCompute project. If the number of parallel MCQA jobs reaches the upper limit, additional MCQA jobs are rolled back to SQL query jobs.
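The reserved-CU requirements for interactive quota groups described above can be sketched as a small validation helper. This is illustrative only; the function name is not part of any MaxCompute API:

```python
def validate_interactive_quota(min_cu: int, max_cu: int) -> bool:
    """Check the reserved-CU rules for a quota group of the interactive
    resource type: [minCU] must equal [maxCU], and the number of
    reserved CUs must be greater than or equal to 50."""
    return min_cu == max_cu and min_cu >= 50

# A plan that reserves [50, 50] CUs satisfies both rules.
assert validate_interactive_quota(50, 50)
# [50, 100] fails because minCU must equal maxCU.
assert not validate_interactive_quota(50, 100)
# [30, 30] fails because fewer than 50 CUs are reserved.
assert not validate_interactive_quota(30, 30)
```

If either rule is violated, the quota group does not take effect and MCQA jobs cannot be submitted to it.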

Caching mechanism

MaxCompute allows you to write the query results of MCQA jobs to a temporary cache. If you run the same query job later, MaxCompute preferentially returns the query results in the cache to accelerate the execution of the job.

MaxCompute creates a temporary dataset for each MCQA job to cache the query results. The owner of the temporary dataset is the user who runs the MCQA job to generate query results. Temporary datasets are invisible to users. Users cannot view the content of the temporary datasets. MaxCompute automatically grants users who run query jobs the permissions to access the temporary datasets.

MaxCompute deletes the cached results in the following scenarios:
  • The resource usage of a MaxCompute project is high.
  • The cached results become invalid due to changes in the tables or views that are referenced by the cached results.
  • The cached results have expired.

Limits

The caching mechanism has the following limits:
  • If you want to query data from the cached results, the MCQA job that you run must be the same as the original MCQA job that generates the cached results and the context of the MCQA job that you run must be the same as that of the original MCQA job. When you rerun an MCQA job, MaxCompute directly reuses the cached results.
  • If the tables or views that are referenced by the cached results change, the cached results become invalid. If you rerun the same job, you cannot obtain the cached data.
  • The maximum cache size of a temporary dataset is 10 GB.

Billing rules

You are not charged for the storage or computations of cached results. This significantly reduces resource usage costs.

Verification

You can obtain the Logview URL of an MCQA job and use the URL to access Logview. On the Job Details tab, verify that the query results of the MCQA job are written to the cache. The following figure shows the Logview of the query job. For more information about Logview, see Use Logview V2.0 to view job information.

Enable MCQA for subscription MaxCompute projects

Procedure

To enable MCQA to accelerate queries for subscription MaxCompute projects, perform the following steps:
  1. Log on to the MaxCompute console and select a region in the upper-left corner of the console.
  2. In the left-side navigation pane, click Quotas.
  3. On the Quotas page, find the level-1 quota that you want to configure and click Quota Configuration in the Actions column.
  4. Configure a quota template.
    1. On the Quota Templates tab of the Quota Configuration page, click Add Quota.
    2. In the Add Quota dialog box, click + Add Quota and configure the Name and Type parameters.

      Enter a custom name in the Name field. Then, select Interactive from the Type drop-down list.

    3. Click OK.
  5. Configure a quota plan.
    1. On the Quota Plans tab of the Quota Configuration page, click Add Plan to create a quota plan. You can also find a desired plan and click Edit in the Actions column to edit the plan.
    2. In the Create Quota Plan or Edit Quota Plan dialog box, enter quota values in the Reserved CUs [minCU,maxCU] column.
      When you enter quota values in the Reserved CUs [minCU,maxCU] column, take note of the following points:
      • The minimum number of reserved CUs must be equal to the maximum number of reserved CUs.
      • The minimum number of reserved CUs must be greater than or equal to 50. If you do not require interactive resources, enter 0 for the quota of the interactive resource type.
      • Quotas of the interactive resource type do not support elastically reserved CUs.
    3. Click OK.
    4. On the Quota Plans tab, find the quota plan that you created or edited, and click Apply Immediately in the Actions column.
  6. Configure a time plan.

    You can configure a time plan to apply different quota plans at different time points every day. This implements the time-specific logic for quota configuration.

Rollback policy

If an MCQA job is rolled back to an SQL query job due to usage limits, the job no longer uses the quota that is dedicated to MCQA jobs for subscription projects. Instead, the job uses the quota that is associated with the current project.

Tools and features for enabling MCQA

You can use one of the following tools or features to enable MCQA:
  • MaxCompute client
  • Ad hoc query or data analytics feature of DataWorks
  • MaxCompute JDBC driver
  • MaxCompute SDK
  • MaxCompute Studio
  • PyODPS
  • SQLAlchemy

Enable MCQA on the MaxCompute client

  1. Download the latest version of the MaxCompute client.
  2. Install and configure the client. For more information, see Install and configure the MaxCompute client.
  3. Modify the odps_config.ini configuration file in the conf folder and add the following settings at the end of the file:
    enable_interactive_mode=true -- Enable MCQA.
    interactive_auto_rerun=true  -- Allow an MCQA job to be automatically rolled back to an SQL query job if the MCQA job fails.
  4. Run the MaxCompute client in the bin folder. Run ./bin/odpscmd in Linux, and run ./bin/odpscmd.bat in Windows. If the following information appears, the MaxCompute client is running as expected.
  5. If the returned results in Logview contain the following information after you run the query job, MCQA is enabled.

Enable MCQA by using the ad hoc query or data analytics feature of DataWorks

By default, MCQA is enabled on the Ad-Hoc Query or Manually Triggered Workflow page in the DataWorks console. Manual operations are not required. If you want to disable MCQA, fill in the DingTalk group application form to join the DingTalk group for processing.

Run a query job on the Ad-Hoc Query page. If the returned results contain the following information, MCQA is enabled. For more information about Ad-Hoc Query, see Create an ad hoc query.
Run a query job on the Manually Triggered Workflow page. If the returned results contain the following information, MCQA is enabled. For more information about Manually Triggered Workflow, see Manage manually triggered workflows.

Enable MCQA by using the MaxCompute JDBC driver

If you use the MaxCompute JDBC driver to connect to MaxCompute, perform the following steps to enable MCQA. For more information, see Usage notes.
  1. Download the JDBC JAR file that supports MCQA or download the source code that can be compiled.
  2. Add the following dependency to the pom.xml file in the Maven repository:
    <dependency>
      <groupId>com.aliyun.odps</groupId>
      <artifactId>odps-jdbc</artifactId>
      <version>3.3.0</version>
      <classifier>jar-with-dependencies</classifier>
    </dependency>
  3. Create a Java program based on the source code to adapt to your business information. For more information, see MaxCompute JDBC. Sample code:
    // Replace your_access_id and your_access_key with the AccessKey ID and AccessKey secret of your Alibaba Cloud account. Replace your_project_name with the name of the project for which MCQA is enabled.
    String accessId = "your_access_id";
    String accessKey = "your_access_key";
    String connString = "jdbc:odps:http://service.<regionid>.maxcompute.aliyun.com/api?"
        + "project=<your_project_name>&charset=UTF-8&interactiveMode=true"
        + "&alwaysFallback=false&autoSelectLimit=1000000000";
    Connection conn = DriverManager.getConnection(connString, accessId, accessKey);
    Statement stmt = conn.createStatement();
    String tableName = "testOdpsDriverTable";
    stmt.execute("drop table if exists " + tableName);
    stmt.execute("create table " + tableName + " (key int, value string)");
    Configure the parameters in connString to optimize the processing logic. The following table describes the parameters.
    Parameter Description
    enableOdpsLogger This parameter is used to display logs. If you do not configure Simple Logging Facade for Java (SLF4J), we recommend that you set this parameter to True.
    fallbackForUnknownError Default value: False. If this parameter is set to True, the system switches to the offline mode when an unknown error occurs.
    fallbackForResourceNotEnough Default value: False. If this parameter is set to True, the system switches to the offline mode when resources are insufficient.
    fallbackForUpgrading Default value: False. If this parameter is set to True, the system switches to the offline mode during an upgrade.
    fallbackForRunningTimeout Default value: False. If this parameter is set to True, the system switches to the offline mode when a connection times out.
    fallbackForUnsupportedFeature Default value: False. If this parameter is set to True, the system switches to the offline mode in scenarios in which MCQA is not supported.
    alwaysFallback Default value: False. If this parameter is set to True, the system switches to the offline mode in one of the preceding scenarios. This parameter is supported only for the MaxCompute JDBC driver of V3.2.3 or later.
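As a quick illustration of how these parameters combine into a connection string, the following sketch assembles a JDBC URL with MCQA enabled and a fallback flag set. The helper name and placeholder values are illustrative, not part of any MaxCompute API:

```python
from urllib.parse import urlencode

def build_mcqa_jdbc_url(endpoint, project, **fallback_flags):
    """Assemble a MaxCompute JDBC URL with MCQA enabled.
    fallback_flags are the parameters from the table above,
    for example alwaysFallback=True or fallbackForUpgrading=True."""
    params = {
        "project": project,
        "charset": "UTF-8",
        "interactiveMode": "true",   # enables MCQA
        "enableOdpsLogger": "true",  # recommended when SLF4J is not configured
    }
    # The JDBC driver expects lowercase true/false for boolean parameters.
    params.update({key: str(value).lower() for key, value in fallback_flags.items()})
    return "jdbc:odps:" + endpoint + "?" + urlencode(params)

url = build_mcqa_jdbc_url(
    "http://service.<regionid>.maxcompute.aliyun.com/api",
    "your_project_name",
    alwaysFallback=True,
)
assert "interactiveMode=true" in url
assert "alwaysFallback=true" in url
```

The resulting string has the same shape as the connString value shown in the JDBC sample above.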
Examples
  • Example 1: Enable MCQA on Tableau.

    Add interactiveMode=true to the URL of Tableau Server. We recommend that you also add enableOdpsLogger=true to display logs. For more information, see Configure MaxCompute JDBC on Tableau.

    Sample URL:
    http://service.cn-beijing.maxcompute.aliyun.com/api?project=****_beijing&interactiveMode=true&enableOdpsLogger=true&autoSelectLimit=1000000000
    To enable MCQA for specific tables in a MaxCompute project, add the table_list=table_name1,table_name2 property to the URL of Tableau Server and use it to specify the tables for which you want to enable MCQA. Separate table names with commas (,). If you specify a large number of tables, access to the URL of Tableau Server becomes time-consuming. We recommend that you specify only the required tables in the URL. If a table contains a large number of partitions, we recommend that you do not use data from all partitions as the data source. Instead, specify partitions or run custom SQL queries to obtain the required data. Sample URL:
    http://service.cn-beijing.maxcompute.aliyun.com/api?project=****_beijing&interactiveMode=true&alwaysFallback=true&enableOdpsLogger=true&autoSelectLimit=1000000000&table_list=orders,customers
  • Example 2: Enable MCQA on SQL Workbench/J.

    After you configure the MaxCompute JDBC driver, you can use MCQA on SQL Workbench/J by modifying the JDBC URL that you specified on the profile configuration page. For more information about how to configure the profile, see Configure MaxCompute JDBC on SQL Workbench/J.

    The following example shows the format of the URL that must be configured:
    jdbc:odps:<MaxCompute_endpoint>?project=<MaxCompute_project_name>&accessId=<AccessKey ID>&accessKey=<AccessKey Secret>&charset=UTF-8&interactiveMode=true&autoSelectLimit=1000000000
    The following table describes the parameters.
    Parameter Description
    MaxCompute_endpoint The endpoint of the region in which MaxCompute resides. For more information, see Endpoints.
    MaxCompute_project_name The name of the MaxCompute project.
    AccessKey ID The AccessKey ID of your Alibaba Cloud account that can access the project.

    You can obtain the AccessKey ID from the AccessKey Pair page.

    AccessKey Secret The AccessKey secret that corresponds to the AccessKey ID.

    You can obtain the AccessKey secret from the AccessKey Pair page.

    charset=UTF-8 The encoding format of the character set.
    interactiveMode Specifies whether to enable MCQA. If this parameter is set to true, MCQA is enabled.
    autoSelectLimit This parameter is required when the number of data records exceeds 1 million.

Enable MCQA by using MaxCompute SDK for Java

For more information about MaxCompute SDK for Java, see SDK for Java. You must add a specified dependency to the pom.xml file in the Maven repository. Sample dependency:
<dependency>
  <groupId>com.aliyun.odps</groupId>
  <artifactId>odps-sdk-core</artifactId>
  <version>3.3.0</version>
</dependency>
The following sample code provides an example on how to create a Java program:
import com.aliyun.odps.Odps;
import com.aliyun.odps.OdpsException;
import com.aliyun.odps.OdpsType;
import com.aliyun.odps.account.Account;
import com.aliyun.odps.account.AliyunAccount;
import com.aliyun.odps.data.Record;
import com.aliyun.odps.data.ResultSet;
import com.aliyun.odps.sqa.*;

import java.io.IOException;
import java.util.*;

public class SQLExecutorExample {

    public static void SimpleExample() {
        // Specify the Alibaba Cloud account and project information. 
        Account account = new AliyunAccount("<your_access_id>", "<your_access_key>");
        Odps odps = new Odps(account);
        odps.setDefaultProject("<your_project_name>");
        odps.setEndpoint("http://service.<regionid>.maxcompute.aliyun.com/api");

        // Prepare to build an SQLExecutor. 
        SQLExecutorBuilder builder = SQLExecutorBuilder.builder();

        SQLExecutor sqlExecutor = null;
        try {
            // run in offline mode or run in interactive mode
            if (false) {
                // Create an SQLExecutor that runs offline SQL queries by default. 
                sqlExecutor = builder.odps(odps).executeMode(ExecuteMode.OFFLINE).build();
            } else {
                // Create an SQLExecutor that runs SQL queries with MCQA enabled by default. Make sure that the system automatically rolls back to the offline query mode if an SQL query with MCQA enabled fails. 
                sqlExecutor = builder.odps(odps).executeMode(ExecuteMode.INTERACTIVE).fallbackPolicy(FallbackPolicy.alwaysFallbackPolicy()).build();
            }
            // Pass special query settings if required. 
            Map<String, String> queryHint = new HashMap<>();
            queryHint.put("odps.sql.mapper.split.size", "128");
            // Submit a query job. You can pass hints. 
            sqlExecutor.run("select count(1) from test_table;", queryHint);

            // The following System.out.println() calls print common information about the query job. 
            // UUID
            System.out.println("ExecutorId:" + sqlExecutor.getId());
            // Query the Logview URL of the current query job. 
            System.out.println("Logview:" + sqlExecutor.getLogView());
            // Query the instance on which the current query job is run. In interactive mode, multiple query jobs may be run on the same instance. 
            System.out.println("InstanceId:" + sqlExecutor.getInstance().getId());
            // Query the progress of the current query job. You can check the progress bar in the console. 
            System.out.println("QueryStageProgress:" + sqlExecutor.getProgress());
            // Query the changelogs about the status of the current query job, such as rollback messages. 
            System.out.println("QueryExecutionLog:" + sqlExecutor.getExecutionLog());

            // Obtain the results of query jobs by calling one of the following API operations: 
            if(false) {
                // Query the results of all query jobs. The API operation that you called is a synchronous operation and may occupy a thread until the query succeeds or fails. 
                // Write the results of all query jobs to the memory at the same time. To prevent memory issues, we recommend that you do not perform this operation if the amount of data is large. 
                List<Record> records = sqlExecutor.getResult();
                printRecords(records);
            } else {
                // Query the ResultSet iterator of the query results. The API operation that you called is a synchronous operation and may occupy a thread until the query succeeds or fails. 
                // Read the results of query jobs in several batches. We recommend that you perform this operation if the amount of data is large. 
                ResultSet resultSet = sqlExecutor.getResultSet();
                while (resultSet.hasNext()) {
                    printRecord(resultSet.next());
                }
            }

            // run another query
            sqlExecutor.run("select * from test_table;", new HashMap<>());
            if(false) {
                // Query the results of all query jobs. The API operation that you called is a synchronous operation and may occupy a thread until the query succeeds or fails. 
                // Write the results of all query jobs to the memory at the same time. To prevent memory issues, we recommend that you do not perform this operation if the amount of data is large. 
                List<Record> records = sqlExecutor.getResult();
                printRecords(records);
            } else {
                // Query the ResultSet iterator of the query results. The API operation that you called is a synchronous operation and may occupy a thread until the query succeeds or fails. 
                // Read the results of query jobs in several batches. We recommend that you perform this operation if the amount of data is large. 
                ResultSet resultSet = sqlExecutor.getResultSet();
                while (resultSet.hasNext()) {
                    printRecord(resultSet.next());
                }
            }
        } catch (OdpsException e) {
            e.printStackTrace();
        } catch (IOException e) {
            e.printStackTrace();
        } finally {
            if (sqlExecutor != null) {
                // Close the SQLExecutor and release related resources. 
                sqlExecutor.close();
            }
        }
    }

    // SQLExecutor can be reused by pool mode
    public static void ExampleWithPool() {
        // Specify the Alibaba Cloud account and project information. 
        Account account = new AliyunAccount("your_access_id", "your_access_key");
        Odps odps = new Odps(account);
        odps.setDefaultProject("your_project_name");
        odps.setEndpoint("http://service.<regionid>.maxcompute.aliyun.com/api");

        // Run query jobs by using a connection pool. 
        SQLExecutorPool sqlExecutorPool = null;
        SQLExecutor sqlExecutor = null;
        try {
            // Create a connection pool. Specify the connection pool size and the default execution mode. 
            SQLExecutorPoolBuilder builder = SQLExecutorPoolBuilder.builder();
            builder.odps(odps)
                    .initPoolSize(1) // init pool executor number
                    .maxPoolSize(5)  // max executors in pool
                    .executeMode(ExecuteMode.INTERACTIVE); // run in interactive mode

            sqlExecutorPool = builder.build();
            // Obtain an SQLExecutor from the connection pool. If no SQLExecutor is obtained from the connection pool and the total number of SQLExecutors does not exceed the upper limit, an SQLExecutor is automatically added. 
            sqlExecutor = sqlExecutorPool.getExecutor();

            // Use the SQLExecutor in the same manner as you use it in the preceding example. 
            sqlExecutor.run("select count(1) from test_table;", new HashMap<>());
            System.out.println("InstanceId:" + sqlExecutor.getId());
            System.out.println("Logview:" + sqlExecutor.getLogView());

            List<Record> records = sqlExecutor.getResult();
            printRecords(records);
        } catch (OdpsException e) {
            e.printStackTrace();
        } catch (IOException e) {
            e.printStackTrace();
        } finally {
            if (sqlExecutor != null) {
                // Return the SQLExecutor to the connection pool.
                sqlExecutor.close();
            }
            if (sqlExecutorPool != null) {
                sqlExecutorPool.close();
            }
        }
    }

    private static void printRecord(Record record) {
        for (int k = 0; k < record.getColumnCount(); k++) {

            if (k != 0) {
                System.out.print("\t");
            }

            if (record.getColumns()[k].getType().equals(OdpsType.STRING)) {
                System.out.print(record.getString(k));
            } else if (record.getColumns()[k].getType().equals(OdpsType.BIGINT)) {
                System.out.print(record.getBigint(k));
            } else {
                System.out.print(record.get(k));
            }
        }
    }

    private static void printRecords(List<Record> records) {
        for (Record record : records) {
            printRecord(record);
            System.out.println();
        }
    }

    public static void main(String args[]) {
        SimpleExample();
        ExampleWithPool();
    }
}

Enable MCQA by using MaxCompute Studio

MCQA is supported in MaxCompute Studio V3.5.0 and later. We recommend that you install the latest version of MaxCompute Studio. For more information about how to install MaxCompute Studio, see Install MaxCompute Studio.

In the SQL editor of MaxCompute Studio, set the SQL execution mode to UseShortQueryAcceleration or FallbackWhenFailed and execute query statements to enable MCQA.
  • UseShortQueryAcceleration: Execute query statements with MCQA.
  • FallbackWhenFailed: Execute query statements with MCQA. If the statements fail, the system rolls back to the default offline mode and executes them again.

Enable MCQA by using PyODPS

MCQA is supported in PyODPS V0.10.7 or later. To enable MCQA, you must use the run_sql_interactive() method when you run SQL statements. Sample statements:
odps = ODPS(...)
instance = odps.run_sql_interactive('select count(*) from test_table')
# logview
print(instance.get_logview_address())
# Optional. If you use Tunnel to download the query results, the MaxCompute client does not need to wait for the instance to complete. 
# Call the wait_for_success() method to wait for the instance to complete. Alternatively, call the wait_for_completion() method. The difference is that wait_for_success() raises an error if the instance fails.
instance.wait_for_success()
# Download the results.
# Display alerts returned by PyODPS.
print(instance.get_warnings())
# Display results. An error is returned if the task instance fails.
print(instance.get_printable_result())
# Download results from the frontend.
for each_record in instance.open_reader():
    pass # The data type of each record in MaxCompute.
# Use Tunnel to download the results.
for each_record in instance.open_reader(tunnel=True):
    pass
The run_sql_interactive() method can contain the following parameters:
  • service_name: optional. The name of the MCQA session.
  • service_startup_timeout: optional. The timeout period for attaching to a session.
  • force_reattach: optional. run_sql_interactive() automatically reuses sessions. The force_reattach parameter is used to forcibly reattach to a session.

Enable MCQA by using SQLAlchemy of PyODPS or a third-party tool that supports the SQLAlchemy interface

PyODPS is integrated with SQLAlchemy and can be used to query data in MaxCompute. To enable MCQA, you must specify the following parameters in the connection string:
  • interactive_mode: required. Specifies whether to enable MCQA. Set this parameter to true.
  • reuse_odps: optional. Specifies whether to forcibly reuse connections. If this parameter is set to true, connections are forcibly reused. Connection reuse improves the performance of specific third-party tools, such as Apache Superset.
You can configure fallback_policy=<policy1>,<policy2>,... in the connection string to optimize the processing logic. The rollback policies specified by this parameter are similar to the parameters that are configured when you enable MCQA by using the MaxCompute JDBC driver. These policies are used to specify the rollback operation that is performed when query acceleration fails.
  • generic: The default value is False. If this parameter is set to True, the system rolls back to the offline mode when an unknown error occurs.
  • noresource: The default value is False. If this parameter is set to True, the system rolls back to the offline mode when resources are insufficient.
  • upgrading: The default value is False. If this parameter is set to True, the system rolls back to the offline mode during an upgrade.
  • timeout: The default value is False. If this parameter is set to True, the system rolls back to the offline mode when a connection times out.
  • unsupported: The default value is False. If this parameter is set to True, the system switches to the offline mode in scenarios in which MCQA is not supported.
  • default: If you specify this parameter, the system performs a rollback based on the settings of the unsupported, upgrading, noresource, or timeout parameters. If you do not specify fallback_policy in the connection string, the default rollback policy is used.
  • all: The default value is False. If this parameter is set to True, the system rolls back to the offline mode in any of the preceding scenarios. This parameter is supported only for MaxCompute JDBC driver V3.2.3 and later.
You can use the following connection string to enable MCQA, forcibly reuse connections, and roll back to the offline mode when an upgrade is in progress, MCQA is not supported, or resources are insufficient:
odps://<access_id>:<access_key>@<project>/?endpoint=<endpoint>&interactive_mode=true&reuse_odps=true&fallback_policy=unsupported,upgrading,noresource
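The connection string above can also be assembled programmatically, which avoids escaping mistakes when the AccessKey secret contains special characters. This is a minimal sketch: mcqa_conn_string is a hypothetical helper, and only the parameter names shown in this topic are used.

```python
from urllib.parse import quote, urlencode

def mcqa_conn_string(access_id, access_key, project, endpoint,
                     fallback=("unsupported", "upgrading", "noresource")):
    # Percent-encode the credentials; keep the commas in fallback_policy readable.
    params = {
        "endpoint": endpoint,
        "interactive_mode": "true",   # required: enables MCQA
        "reuse_odps": "true",         # optional: forcibly reuse connections
        "fallback_policy": ",".join(fallback),
    }
    return "odps://%s:%s@%s/?%s" % (
        quote(access_id, safe=""), quote(access_key, safe=""),
        project, urlencode(params, safe=","))
```

If the PyODPS SQLAlchemy dialect is installed, you can pass the resulting string to sqlalchemy.create_engine().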

FAQ

  • Question 1: When I use the MaxCompute JDBC driver to connect to MaxCompute and run an SQL job that uses subscription resources, the error code ODPS-1800001 is returned. What do I do? The following information shows the error details:
    Error:com.aliyun.odps.OdpsException: ODPS-1800001: Session exception - Failed to submit sub-query in session because:Prepaid project run out of free query quota.
    • Possible cause:

      MCQA is enabled, and the feature is in public preview. If you have purchased a subscription resource plan, you can use the feature free of charge during the public preview without additional configuration. During the free preview, a MaxCompute project supports a maximum of five parallel MCQA jobs, and a maximum of 500 jobs per day can be accelerated free of charge. If the number of jobs exceeds 500, the preceding error is returned.

    • Solution:

      When you enable MCQA for the JDBC driver, set the alwaysFallback parameter to true. After the configuration takes effect, MCQA accelerates a maximum of 500 query jobs per day. If the number of jobs exceeds 500, the excess MCQA jobs are rolled back to standard SQL query jobs. For more information about the configuration operations and parameter descriptions, see Tools and features for enabling MCQA.

  • Question 2: When MCQA is enabled by using PyODPS, sending requests and obtaining results takes longer than when MCQA is enabled by using DataWorks. What do I do?
    • Possible causes:
      • The wait_for_xxx method is used. As a result, the duration for sending requests and obtaining results is prolonged.
      • The polling interval is long.
    • Solutions:
      • If requests are sent at a high frequency, we recommend that you do not use the wait_for_xxx method. After a request is sent, use Tunnel commands to download the results.
      • Call instance.wait_for_success(interval=0.1) to reduce the polling interval. Sample code:
        from odps import ODPS, errors
        
        max_retry_times = 3
        
        def run_sql(odps, stmt):
            # Retry the query up to max_retry_times times on MCQA errors.
            retry = 0
            while retry < max_retry_times:
                try:
                    inst = odps.run_sql_interactive(stmt)
                    print(inst.get_logview_address())
                    # Poll every 0.1 seconds instead of the default interval.
                    inst.wait_for_success(interval=0.1)
                    # Download the results over Tunnel.
                    records = []
                    for each_record in inst.open_reader(tunnel=True):
                        records.append(each_record)
                    return records
                except errors.ODPSError as e:
                    retry += 1
                    print("Error: " + str(e) + " retry: " + str(retry) + "/" + str(max_retry_times))
                    if retry >= max_retry_times:
                        raise
        
        odps = ODPS(...)
        
        run_sql(odps, 'select 1')
  • Question 3: How do I use Logview to troubleshoot Java SDK errors when MCQA is enabled by using MaxCompute SDK for Java?
    Solution: MaxCompute SDK for Java provides a Logview interface. You can call the interface to obtain the Logview URL:
    String logview = sqlExecutor.getLogView();
  • Question 4: How do I obtain the Logview URL of MaxCompute when MCQA is enabled by using the MaxCompute JDBC driver?

    Solution: The MaxCompute JDBC driver is encapsulated based on MaxCompute SDK for Java. A Logview URL is generated when you execute SQL statements by using the MaxCompute client, MaxCompute Studio, or DataWorks. A Logview URL is also generated when you use the MaxCompute JDBC driver to execute SQL statements. You can use Logview to view the task status, track the task progress, and obtain task results. To obtain the Logview URL, configure the log output in the log4j.properties file. By default, the Logview URL is included in the standard output logs that are displayed on the terminal.
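    For reference, a minimal log4j.properties sketch that routes the driver's logs, including the Logview URL, to the console might look as follows. The exact logger names and levels depend on your driver version; treat this as an assumption, not the definitive configuration:
    ```properties
    # Send all INFO-level logs to stdout; the Logview URL appears in these logs by default.
    log4j.rootLogger=INFO, stdout
    log4j.appender.stdout=org.apache.log4j.ConsoleAppender
    log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
    log4j.appender.stdout.layout.ConversionPattern=%d{ISO8601} %-5p %c - %m%n
    ```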