
Data Lake Analytics - Deprecated: Analyze data with temporary tables

Last Updated: Dec 06, 2018

This topic describes how to analyze data by using temporary tables. With a temporary table, you declare the table schema and the Object Storage Service (OSS) data location inline in the query itself, so you can use the analytic power of Data Lake Analytics (DLA) without creating permanent tables.

Examples

You can query data in a temporary table by declaring its schema and OSS location directly in the SELECT statement:

  SELECT * FROM
  TABLE temp_1
  (
      id INT,
      name STRING,
      location STRING,
      create_date DATE,
      create_timestamp TIMESTAMP,
      longitude DOUBLE,
      latitude DOUBLE
  )
  ROW FORMAT DELIMITED FIELDS TERMINATED BY '|'
  STORED AS TEXTFILE LOCATION 'oss://mybucket102/file.txt';

The preceding statement is equivalent to creating an external table and then querying it:

  CREATE EXTERNAL TABLE temp_1 (
      id INT,
      name STRING,
      location STRING,
      create_date DATE,
      create_timestamp TIMESTAMP,
      longitude DOUBLE,
      latitude DOUBLE
  )
  ROW FORMAT DELIMITED FIELDS TERMINATED BY '|'
  STORED AS TEXTFILE LOCATION 'oss://mybucket102/';
  -- Then run the following query:
  SELECT * FROM temp_1;
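
Note that the CREATE EXTERNAL TABLE statement leaves a table definition behind, whereas the temporary table exists only for the duration of the query. The following cleanup sketch for the permanent-table equivalent assumes standard Hive-compatible DDL; the temporary-table form needs no such step.

  -- Optional cleanup for the permanent-table equivalent (assumption: standard Hive-compatible DDL).
  -- Dropping an external table removes only the table definition; the file in OSS is not deleted.
  DROP TABLE IF EXISTS temp_1;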

You can also join multiple temporary tables in a single query:

  SELECT temp_1.col1, temp_2.smallint_col
  FROM
  TABLE temp_1
  (
      col1 INT,
      col2 INT
  )
  LOCATION 'oss://mybucket102/file1.txt'
  JOIN
  TABLE temp_2
  (
      id INT COMMENT 'default',
      bool_col BOOLEAN COMMENT 'default',
      tinyint_col TINYINT COMMENT 'default',
      smallint_col SMALLINT COMMENT 'default',
      int_col INT COMMENT 'default',
      bigint_col BIGINT COMMENT 'default',
      float_col FLOAT COMMENT 'default',
      double_col DOUBLE COMMENT 'default',
      date_string_col STRING COMMENT 'default',
      string_col STRING COMMENT 'default',
      timestamp_col TIMESTAMP COMMENT 'default'
  )
  ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe'
  WITH SERDEPROPERTIES ('field.delim'='|', 'serialization.format'='|')
  STORED AS INPUTFORMAT 'org.apache.hadoop.mapred.TextInputFormat'
  OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat'
  LOCATION 'oss://mybucket102/file2.txt'
  TBLPROPERTIES ('recursive.directories'='false')
  ON temp_1.col1 = temp_2.id
  WHERE temp_2.bool_col = true;
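
Because the WHERE clause in the preceding query follows the inline table definitions, other query clauses can be written in the same position. The following aggregation sketch reuses the temp_1 definition from the first example; placing the GROUP BY clause after the inline definition is an assumption based on that pattern.

  -- Count rows per location in the OSS file, reusing the inline schema of temp_1.
  SELECT location, COUNT(*) AS record_count
  FROM
  TABLE temp_1
  (
      id INT,
      name STRING,
      location STRING,
      create_date DATE,
      create_timestamp TIMESTAMP,
      longitude DOUBLE,
      latitude DOUBLE
  )
  ROW FORMAT DELIMITED FIELDS TERMINATED BY '|'
  STORED AS TEXTFILE LOCATION 'oss://mybucket102/file.txt'
  GROUP BY location;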

The preceding SQL queries read the data in OSS directly; no permanent tables are created.