
MaxCompute:SQL runtime execution mode for MapReduce

Last Updated: Mar 26, 2026

SQL runtime execution mode runs MapReduce jobs on the MaxCompute SQL engine — gaining access to the cost-based optimizer (CBO), vectorized execution engine, and other SQL engine capabilities — without requiring changes to existing job logic or interfaces.

How it works

When SQL runtime mode is enabled, MaxCompute translates a MapReduce job into an equivalent SQL expression and runs it through the SQL engine pipeline. The job continues to be billed under MapReduce billing rules.

Compared to standard MapReduce jobs, jobs running in SQL runtime execution mode additionally support:

  • Views as input

  • External tables as input

  • Read and write operations on distributed file systems

  • Read and write operations on hash-clustered tables and range-clustered tables

SQL runtime mode also provides:

  • Continuous performance improvements through the CBO and vectorized execution engine

  • A new storage format compression mechanism

  • Dynamic parallelism adjustment for JOIN operations on ultra-large input tables (for example, hash-clustered tables)

  • Failover and persistent volume claim (PVC) stability mechanisms inherited from the SQL engine

  • Detailed execution visibility in LogView and MaxCompute Studio — including execution plans, logical plans, physical plans, compilation information, and job configuration

Prerequisites

Before you begin, ensure that you have:

  • MapReduce jobs compiled using MapReduce APIs; jobs built any other way cannot run in SQL runtime execution mode (see Overview)

  • An understanding that billing follows MapReduce rules, not SQL rules (see Pay-as-you-go billing for MapReduce jobs)

Configure the execution mode

Set the odps.mr.run.mode parameter to control which engine runs your MapReduce jobs.

Value    Behavior
lot      Uses the MapReduce engine (default)
sql      Uses the SQL engine; returns an error if execution fails
hybrid   Tries the SQL engine first; falls back to the MapReduce engine on failure

Configure the parameter at the project level or session level depending on your scope.

Project level

To apply the setting to all MapReduce jobs in a project, the project administrator runs:

setproject odps.mr.run.mode=<lot/sql/hybrid>;
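For example, to enable hybrid mode project-wide and then review the project properties to confirm the change (a sketch for the MaxCompute client; running setproject with no arguments lists the current project properties):

```sql
-- Enable hybrid mode for all MapReduce jobs in the project
setproject odps.mr.run.mode=hybrid;

-- List current project properties to confirm the setting
setproject;
```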

Session level

To apply the setting to a single job, choose the method based on whether you control the job source code:

Scenario                                      Method
You control the source code                   Set the parameter in the JobConf object
You have a compiled JAR and no source access  Add a set statement before the JAR statement

If you control the source code, set the parameter in the job code:

// Requires the MaxCompute MapReduce SDK (odps-sdk-mapred)
import com.aliyun.odps.mapred.conf.JobConf;

JobConf job = new JobConf();
// Try the SQL engine first; fall back to the MapReduce engine on failure
job.set("odps.mr.run.mode", "hybrid");

If you only have a compiled JAR, add the following statement before the JAR statement in your submission command:

set odps.mr.run.mode=<lot/sql/hybrid>;
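Put together, a session-level submission might look like the following sketch. The JAR name, main class, and input/output table names (wordcount.jar, com.example.WordCount, wc_in, wc_out) are placeholders for your own job, not values from this document:

```sql
-- Pick the SQL engine with MapReduce fallback for this job only
set odps.mr.run.mode=hybrid;
-- Hypothetical job submission: substitute your own JAR, class, and tables
jar -resources wordcount.jar -classpath ./wordcount.jar com.example.WordCount wc_in wc_out;
```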

Special scenarios

If your job uses StreamJob or SecondarySort, add the corresponding flag in addition to setting odps.mr.run.mode:

Job type        Required flag
StreamJob       set odps.mr.sql.stream.enable=true;
SecondarySort   set odps.mr.sql.group.enable=true;
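For example, a SecondarySort job forced onto the SQL engine needs both statements before the JAR statement that submits it (a session-level sketch; the flag values follow the table above):

```sql
-- Run on the SQL engine; fail rather than fall back
set odps.mr.run.mode=sql;
-- Required additionally for SecondarySort jobs
set odps.mr.sql.group.enable=true;
-- ...followed by the jar statement that submits the SecondarySort job
```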

Verify the execution mode

Use LogView to confirm that a job ran in SQL runtime execution mode. For general information about LogView, see Use LogView V2.0 to view job information.

Check the Summary tab

On the Summary tab, look for the Job run engine field:

Job run mode: fuxi job
Job run engine: execution engine

If the Job run engine field shows execution engine, the job ran in SQL runtime execution mode. If the field is absent, the job ran on the MapReduce engine, not the SQL engine; this is expected behavior, not missing data. For extended MapReduce (MR2) jobs not in SQL runtime mode, the engine value shows as cganjiang; for standard MapReduce jobs not in SQL runtime mode, no engine field is displayed.

Check the SourceXML tab

On the SourceXML tab, LogView shows the SQL expression that MaxCompute generated from the MapReduce job. This is the SQL equivalent of your job logic as submitted from the client. Example:

create temporary function mr2sql_mapper_152955927079392291755 as 'com.aliyun.odps.mapred.bridge.LotMapperUDTF' using ;
create temporary function mr2sql_reducer_152955927079392291755 as 'com.aliyun.odps.mapred.bridge.LotReducerUDTF' using ;

@sub_query_mapper :=
SELECT k_id,v_gmt_create,v_gmt_modified,v_product_id,v_admin_seq,v_sku_attr,v_sku_price,v_sku_stock,v_sku_code,v_sku_image,v_delivery_time,v_sku_bulk_order,v_sku_bulk_discount,v_sku_image_version,v_currency_code
FROM(
  SELECT mr2sql_mapper_152955927079392291755(id,gmt_create,gmt_modified,product_id,admin_seq,sku_attr,sku_price,sku_stock,sku_code,sku_image,delivery_time,sku_bulk_order,sku_bulk_discount,sku_image_version,currency_code ) as (k_id,v_gmt_create,v_gmt_modified,v_product_id,v_admin_seq,v_sku_attr,v_sku_price,v_sku_stock,v_sku_code,v_sku_image,v_delivery_time,v_sku_bulk_order,v_sku_bulk_discount,v_sku_image_version,v_currency_code)
  FROM ae_antispam.product_sku_tt_inc
  WHERE ds = "20180615"  AND hh = "21"
  UNION ALL
  SELECT mr2sql_mapper_152955927079392291755(id,gmt_create,gmt_modified,product_id,admin_seq,sku_attr,sku_price,sku_stock,sku_code,sku_image,delivery_time,sku_bulk_order,sku_bulk_discount,sku_image_version,currency_code ) as (k_id,v_gmt_create,v_gmt_modified,v_product_id,v_admin_seq,v_sku_attr,v_sku_price,v_sku_stock,v_sku_code,v_sku_image,v_delivery_time,v_sku_bulk_order,v_sku_bulk_discount,v_sku_image_version,v_currency_code)
  FROM ae_antispam.product_sku
) open_mr_alias1
DISTRIBUTE BY k_id SORT BY k_id ASC;

@sub_query_reducer :=
SELECT mr2sql_reducer_152955927079392291755(k_id,v_gmt_create,v_gmt_modified,v_product_id,v_admin_seq,v_sku_attr,v_sku_price,v_sku_stock,v_sku_code,v_sku_image,v_delivery_time,v_sku_bulk_order,v_sku_bulk_discount,v_sku_image_version,v_currency_code) as (id,gmt_create,gmt_modified,product_id,admin_seq,sku_attr,sku_price,sku_stock,sku_code,sku_image,delivery_time,sku_bulk_order,sku_bulk_discount,sku_image_version,currency_code)
FROM @sub_query_mapper;
FROM @sub_query_reducer
INSERT OVERWRITE TABLE ae_antispam.product_sku
SELECT id,gmt_create,gmt_modified,product_id,admin_seq,sku_attr,sku_price,sku_stock,sku_code,sku_image,delivery_time,sku_bulk_order,sku_bulk_discount,sku_image_version,currency_code ;

Check the Json Summary tab

The Json Summary tab provides richer detail for SQL runtime jobs than for standard MapReduce jobs. Standard MapReduce shows only Map and Reduce input/output. SQL runtime shows all execution parameters, logical plans, physical plans, and per-phase execution details. Example:

"midlots" :
[
"LogicalTableSink(table=[[odps_flighting.flt_20180621104445_step1_ad_quality_tech_qp_algo_antifake_wordbag_filter_bag_change_result_lv2_20, auctionid,word,match_word(3) {0, 1, 2}]])
OdpsLogicalProject(auctionid=[$0], word=[$1], match_word=[$2])
OdpsLogicalProject(auctionid=[$0], word=[$1], match_word=[$2])
OdpsLogicalProject(auctionid=[$0], word=[$1], match_word=[$2])
OdpsLogicalProject(auctionid=[$2], word=[$3], match_word=[$4])
OdpsLogicalTableFunctionScan(invocation=[[MR2SQL_MAPPER_152955294118813063732($0, $1)]()], rowType=[RecordType(VARCHAR(2147483647) item_id, VARCHAR(2147483647) text, VARCHAR(2147483647) __tf_0_0, VARCHAR(2147483647) __tf_0_1, VARCHAR(2147483647) __tf_0_2)])
OdpsLogicalTableScan(table=[[ad_quality_tech.qp_algo_antifake_wordbag_filter_bag_change_lv2_20, item_id,text(2) {0, 1}]])
]