Kenan
Assistant Engineer

[Others] MaxCompute SQL performance tuning

Posted time: Nov 3, 2016 9:39 AM
When using MaxCompute (formerly known as ODPS), some users may encounter long execution times for submitted jobs (especially jobs writing output to external destinations) and high occupation of cluster resources, because of an insufficient understanding of product usage and the execution mechanism. In serious cases the job may fail, requiring assistance from support personnel and disrupting the user's normal business operations. In this article I summarize some performance tuning methods to help users run queries conveniently and optimize SQL efficiency. Some tuning settings that used to require manual operation, such as MapJoin and PPD predicate push-down (note the partition position), have become automated as the product evolves. Although the tuning parameters and details vary between product versions, once we are familiar with the tuning ideas and methods, we can draw inferences for other cases and deepen our understanding.
I. Overall idea
      In general, tuning can be performed on multiple layers, from the underlying layer up to the SQL layer. With the constant improvement of the product, some tuning items have been automated. Users who are familiar with common tuning mechanisms and execution principles, and who can adjust SQL statements, business logic, and relevant parameter settings during development and execution, can achieve twice the result with half the effort.
      1. Tuning at the hardware and OS layers: including disk I/O tuning (multiplexing and so on), network tuning (cache sizes, number of connections), and memory tuning (virtual memory settings, memory control and so on);
      2. Tuning at the distributed computing platform and storage layers: storage format settings, compression format settings, RPC settings, connection control settings, scheduling mechanism settings, block and sharding settings, execution resource settings and so on;
      3. Adjustments to the business logic and parameters: apart from global tuning, tuning the parameters of different types of operations, and rewriting the SQL statements for aggregate, join and one-read-multiple-write operations, or setting different parameters for them, can greatly improve performance;
      4. Tuning at the SQL statement and application layers: including restructuring SQL statements, merging SQL statements, and converting joins of big and small tables to MapJoin. In ODPS 2.0, MapJoin conversion has been automated; at present, users of previous versions and users writing output to external destinations still need to modify their SQL statements to use this feature.
      The tuning at different layers and their advantages and disadvantages can be found below:


II. Scenarios and tuning case studies
1. Modify a join of big and small tables to MapJoin by adding the MapJoin hint
Methods and points of attention:
• The total memory occupied by all the designated small tables should not exceed 512 MB.
• When joining multiple tables, the two leftmost tables cannot both be MapJoin tables.
• Different join types (left/right/inner) impose requirements on the order of the tables:
o left outer join: the left table must be the big table.
o right outer join: the right table must be the big table.
o inner join: either table can be the big table.
o full outer join: MapJoin cannot be used directly; rewrite it as MapJoin + UNION ALL.
• Auto MapJoin is supported in the latest release: small tables are converted into MapJoin tables automatically according to the sizes of the joined tables. SQL statements without an explicit hint, as well as statements whose small tables are intermediate results, are optimized automatically.
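As a sketch, a MapJoin hint on a hypothetical big fact table and small dimension table might look like this (table and column names are illustrative):

```sql
-- sales_detail is the big table; dim_shop is the small table.
-- The hint names the small table, which must fit within the 512 MB limit.
SELECT /*+ MAPJOIN(b) */
       a.shop_id,
       b.shop_name,
       SUM(a.amount) AS total_amount
FROM   sales_detail a
LEFT OUTER JOIN dim_shop b      -- left outer join: the left table (a) is the big table
ON     a.shop_id = b.shop_id
GROUP BY a.shop_id, b.shop_name;
```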
2. Data skews
Manifestations of data skew:
• The job progress is stuck at 99% for a long time; the monitoring data shows that only a few reduce sub-jobs have not completed.
• The maximum duration of a single reduce instance is far longer than the average duration, and the number of records it processes differs greatly from the average.
Optimization methods and points of attention:
• For aggregation skew, set the parameter: set odps.sql.groupby.skewindata=true;
• For join skew, set the parameters: set odps.sql.skewinfo=tab1:(col1,col2)[(v1,v2),(v3,v4),...|(v1,v2),(v3,v4),...]; set odps.sql.optimize.skewjoin=true;
• For join skew where big and small tables are joined, consider rewriting the join as MapJoin.
• Case by case: for skew caused by null key values, replace the null values with random values so that the data is scattered across different instances for execution.
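The null-key technique can be sketched as follows, assuming a hypothetical log table whose user_id is often NULL (all names are illustrative):

```sql
-- NULL keys are replaced with distinct random values so they no longer all
-- hash to the same reduce instance; the substitutes match no row in dim_user,
-- so the left outer join result is unchanged.
SELECT t.*, b.user_name
FROM (
    SELECT a.*,
           CASE WHEN a.user_id IS NULL
                THEN CONCAT('skew_', CAST(RAND() AS STRING))
                ELSE CAST(a.user_id AS STRING)
           END AS join_key
    FROM   log_detail a
) t
LEFT OUTER JOIN dim_user b
ON t.join_key = CAST(b.user_id AS STRING);
```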


3. Too many small files: reduce the small files generated when writing dynamic partitions
Methods and points of attention:
• Set the parameter: set odps.sql.reshuffle.dynamicpt=true.
• Static partitions are recommended whenever possible.
• Enabling this parameter adds a level of reduce, which consumes additional resources.
• Pay attention to the data skew that may occur when this parameter is used.
• It is recommended to disable this parameter (set it to false) when there are few dynamic partitions.
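A minimal sketch of a dynamic-partition insert with the reshuffle option enabled (table and column names are illustrative):

```sql
-- Adds a reduce stage that groups rows by partition value, so each dynamic
-- partition is written by fewer instances and produces fewer small files.
set odps.sql.reshuffle.dynamicpt=true;

INSERT OVERWRITE TABLE sales_by_day PARTITION (ds)
SELECT shop_id, amount, ds      -- ds supplies the dynamic partition value
FROM   sales_detail
WHERE  ds >= '20161101';
```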
4. Merge many small files
Methods and points of attention:
• How to merge small files:
o ALTER merge mode: alter table … [partition] merge smallfiles;
o SQL merge mode: set odps.task.merge.enable=true
• Adjust parameters to control the merge behavior:
o Whether to allow cross-path merging (default false): odps.merge.cross.paths
o Size threshold below which a file counts as small (default 32M): odps.merge.smallfile.filesize.threshold
o Target size of a merged output file (default 500M): odps.merge.maxmerged.filesize.threshold
o Number of files allowed to be merged on a single instance (default 200): odps.merge.max.filenumber.per.instance
o Maximum number of small files to be merged in one job (default 50000): odps.merge.max.filenumber.per.job
o Timeout for small-file merging (default 600s): odps.task.merge.wait.for.fuxi.timeout
o Maximum number of partitions that one SQL submission may merge (default 20): odps.merge.max.partition.count
o If these parameters are configured at both the session and the project level, the session configuration prevails.
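As a sketch, the two merge modes on a hypothetical partitioned table might look like this (the table and partition are illustrative):

```sql
-- ALTER merge mode: explicitly merge the small files of one partition.
ALTER TABLE sales_detail PARTITION (ds='20161101') MERGE SMALLFILES;

-- SQL merge mode: enable automatic merging for subsequent tasks in this
-- session, lowering the per-instance file count from its default of 200.
set odps.task.merge.enable=true;
set odps.merge.max.filenumber.per.instance=50;
```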
5. One read, multiple writes at the Map end
Scenarios and optimization:
The same physical table is read multiple times, different operations are performed, and the results are written to multiple tables; consider the connections with and differences from multiinsert, and whether the statements can be rewritten.
Create temporary tables so that downstream processing can run concurrently.
Points of attention:
• Advantage: it greatly saves the cluster's computing resources and disk I/O resources.
• Disadvantage: when there are many writes, the performance of individual writes may be compromised, impairing the overall job execution efficiency.
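The pattern above can be sketched with a multiinsert statement (table names are illustrative): the source table is scanned once and feeds two destinations.

```sql
-- One scan of sales_detail performs two writes (one read, multiple writes).
FROM sales_detail a
INSERT OVERWRITE TABLE sales_by_shop
SELECT a.shop_id, SUM(a.amount) GROUP BY a.shop_id
INSERT OVERWRITE TABLE sales_by_day
SELECT a.ds, SUM(a.amount) GROUP BY a.ds;
```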
6. Partition pruning
Scenario: the fact table has many partitions and a large data size.
Optimization: avoid full-table scans to reduce resource waste; verify whether partition pruning takes effect (pay attention to the table design and usage, and try to make partition pruning effective).
Points of attention:
• If the partition column in the filter condition is wrapped in a UDF, partition pruning does not take effect.
• When the join condition contains the partition column:
o If the partition column appears in the on condition, partition pruning takes effect.
o If the partition column appears in the where condition, partition pruning takes effect for the master table, but it may fail for the other tables.
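A minimal sketch of the difference, on a hypothetical table partitioned by ds:

```sql
-- Pruning takes effect: the partition column is compared to a constant,
-- so only one partition is scanned.
SELECT COUNT(*) FROM sales_detail WHERE ds = '20161101';

-- Pruning may NOT take effect: the partition column is wrapped in a
-- function, so every partition may be scanned before the filter applies.
SELECT COUNT(*) FROM sales_detail WHERE SUBSTR(ds, 1, 6) = '201611';
```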
7. SQL merge
Scenarios:
1. The same data is read multiple times, the source data is large in size, and the job performs poorly and costs much.
2. SQL statements associated with one business process, or statements that compute statistics over multiple indicators or filter the same data by different conditions.
Optimization methods and points of attention: rewrite the SQL statements and merge them into one statement for execution, so as to minimize reads of the same data source. As a result, one scan can calculate multiple basic statistical items or handle multiple filter conditions. Typical adjustments:
• CASE … WHEN …: merge joins of different sub-queries on the same data source.
• Dynamic partitions & multiinsert: insert statistics that meet different conditions into different tables or table partitions.
• Merge the SQL statements of one process into a single SQL statement for execution.
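As a sketch of the CASE … WHEN technique (illustrative names), two filtered counts that would otherwise need two scans are computed in one:

```sql
-- One scan computes several conditional statistics at once.
SELECT ds,
       COUNT(*)                                       AS total_orders,
       SUM(CASE WHEN amount >= 100 THEN 1 ELSE 0 END) AS big_orders,
       SUM(CASE WHEN amount <  100 THEN 1 ELSE 0 END) AS small_orders
FROM   sales_detail
GROUP BY ds;
```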
8. Optimize SQL statements using window functions
Window functions:
1. Facilitate flexible analysis and processing.
2. Use partition by to open a window, and order by to sort within the window.
3. rows can be used to specify the window range.
4. A rich set of window functions is available.


Optimization and points of attention: reasonable use of window functions can reduce the number of joins and improve performance. Without window functions, you would have to write complicated SQL to get the same results; window functions execute efficiently and produce the expected results directly.
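A minimal sketch, assuming a hypothetical sales_detail table: each shop's latest order is fetched without a self-join by ranking rows within a window.

```sql
-- ROW_NUMBER ranks rows inside each shop_id window; the outer filter keeps
-- the latest order per shop, replacing a join against a per-shop MAX subquery.
SELECT shop_id, order_id, order_time
FROM (
    SELECT shop_id, order_id, order_time,
           ROW_NUMBER() OVER (PARTITION BY shop_id ORDER BY order_time DESC) AS rn
    FROM   sales_detail
) t
WHERE t.rn = 1;
```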
III. Meanings and default values of related parameters