This topic describes the entry point and features of Logview V2.0. You can use Logview V2.0 to view job execution information.

Overview

Logview V2.0 provides a newly designed UI, loads faster, and delivers the following new features:
  • Provides an interactive directed acyclic graph (DAG) to display the logical architecture of job processing. You can also view the operators of a job.
  • Supports job execution playback.
  • Allows you to view memory usage and CPU utilization by using Fuxi Sensor.

Entry point

After you use the MaxCompute client (odpscmd) to submit a job, the system generates a Logview URL. Enter the Logview URL in the address bar of your browser and press Enter. On the Logview UI, click Go to Logview V2.0.
The following figure shows the Logview V2.0 page.
No. Section
1 The title and functionality section. For more information, see Title and functionality section.
2 The Basic Info section. For more information, see Basic Info section.
3 The job details section. For more information, see Job details section.
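
If you submit jobs by using an SDK instead of odpscmd, you can also obtain the Logview URL programmatically. The following minimal sketch assumes the PyODPS SDK; the credentials, project name, endpoint, and query are placeholders.

  # Minimal sketch, assuming the PyODPS SDK (the MaxCompute SDK for Python).
  # The credentials, project, endpoint, and query below are placeholders.
  from odps import ODPS

  o = ODPS(
      access_id='<AccessKey ID>',
      secret_access_key='<AccessKey Secret>',
      project='<project_name>',
      endpoint='<MaxCompute endpoint>',
  )

  # Submit a query without blocking and print its Logview URL.
  instance = o.run_sql('SELECT 1;')
  print(instance.get_logview_address())  # open this URL in a browser

  # Optionally wait for the job to finish.
  instance.wait_for_success()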

Title and functionality section

This section shows the job ID and job name. The job ID uniquely identifies a MaxCompute job. The job ID is generated when you submit the job. The job name is displayed only if the job is submitted by using an SDK. You can also click the icons on the right of this section to perform operations.
Icon Description
Open icon Open a Logview_detail.txt file that contains job details and is saved on your computer.
Back icon Return to the original Logview UI.
Save icon Save the job details as a file to your computer.

Basic Info section

This section shows the basic information about a job.
Parameter Description
MaxCompute Service The endpoint of MaxCompute on which the job runs. For more information, see Endpoints.
Project The name of the MaxCompute project to which the job belongs.
Cloud account The Alibaba Cloud account that is used to submit the job.
Type The type of the job. Valid values: SQL, SQLRT, LOT, XLib, CUPID, AlgoTask, and Graph.
Status The status of the job. Valid values:
  • Success: The job succeeds.
  • Failed: The job fails.
  • Canceled: The job is canceled.
  • Waiting: The job is being processed in MaxCompute but is not submitted to Job Scheduler.
  • Running: The job is being processed in Job Scheduler.
  • Terminated: The job is completed.
Start Time The time at which the job is submitted.
End Time The time at which the job is completed.
Latency The amount of time that the job takes to run.
Progress The progress of the job.
Priority The priority of the job.
Queue The position of the job in the queue of the resource quota group.

Job details section

In the job details section, you can query details about a job. This section consists of the following tabs:
  • Job Details
    • Progress chart
      In the upper part of the Job Details tab, the progress chart of a job is displayed. The progress chart shows the subtask dependencies from three dimensions: Fuxi jobs, Fuxi tasks, and operators. It also provides a series of tools to help you locate issues. The following figure shows the upper part of the Job Details tab. Progress chart
      No. Description
      1 The breadcrumb navigation that is used to switch Fuxi jobs. JOB:_SQL_0_0_0_job_0 is the name of a Fuxi job.
      2 The troubleshooting tool. You can use Progress Chart, Input Heat Chart, Output Heat Chart, TaskTime Heat Chart, and InstanceTime Heat Chart for troubleshooting.
      3 You can click the Refresh icon to refresh the job status and click the Full Screen Display icon to zoom in or out on the progress chart. You can also click the Help icon to obtain MaxCompute Studio documentation and click the Switch Level icon to switch to the upper level of the job.
      4 The zoom tool.
      5 The Fuxi task. A MaxCompute job consists of one or more Fuxi jobs. Each Fuxi job consists of one or more Fuxi tasks. Each Fuxi task consists of one or more Fuxi instances. If the amount of input data increases, MaxCompute starts more nodes for each task to process the data. A node is equivalent to a Fuxi instance. For example, a simple MapReduce job generates two Fuxi tasks: map task (M1) and reduce task (R2). If an SQL statement is complex, multiple Fuxi tasks may be generated.

      You can view the name of each Fuxi task on the execution page. For example, M1 indicates a map task. The 3 and 9 fields in R4_3_9 indicate that the R4 task can be executed only after M3 and C9_3 are completed. Similarly, M2_4_9_10_16 indicates that the M2 task can be executed only after R4_3_9, C9_3, R10_1_16, and C16_1 are completed. R/W indicates the numbers of rows that the task reads and writes.

      Click or right-click a task to view the operator dependencies and operator graphs of the task.

      Logview V2.0 provides table dependencies so that you can view the input and output tables.

      6 The Fuxi task playback. You can click the Play icon to start or stop the playback. You can drag the progress bar to start the playback from a specific point in time. The start time and end time are displayed on the two sides of the progress bar, and the current playback time is displayed in the middle.
      7 EagleEye.
      Note
      • The playback feature is not applicable to Fuxi tasks that are in the Running state.
      • An AlgoTask job, such as a Machine Learning Platform for AI (PAI) job, contains only one Fuxi task. Therefore, no progress charts are provided for these jobs.
      • For non-SQL jobs, only Fuxi jobs and Fuxi tasks are displayed.
      • If only one Fuxi job exists, the progress chart shows the dependencies among Fuxi tasks. If multiple Fuxi jobs exist, the progress chart shows the dependencies among the Fuxi jobs.
    • Job status
      In the lower part of the Job Details tab, detailed information about the job is displayed. The following figure shows the lower part of the Job Details tab. Result
      No. Description
      1 The Fuxi Jobs tab. You can switch Fuxi jobs on this tab.
      2 The details about the Fuxi tasks of the Fuxi job. Click a Fuxi task to display information about the Fuxi instance of this task. By default, information about the Fuxi instance of the first Fuxi task for the first Fuxi job is displayed.
      For AlgoTask jobs, the Sensor column appears in this section. You can click the sensor that corresponds to a Fuxi task to view the CPU and memory information of the Fuxi instance. For more information, see Fuxi Sensor.
      Note Fuxi Sensor is available in the China (Chengdu), China (Shenzhen), China (Shanghai), China (Hangzhou), China (Zhangjiakou), and China (Beijing) regions.
      3 Logview divides Fuxi instances into groups based on their status. You can click the number next to Failed to query information about the failed instances.
      4 Fuxi instances.
      5 StdOut and StdErr. You can view the output messages, error messages, and information that you want to display. You can also download the information.
      6 Debug. You can debug and troubleshoot errors.
  • Result
    This tab shows the result of a job. If a job fails, this tab appears and shows the cause of the failure. The system displays the result in plain text or as a table based on the output format that is specified in MaxCompute. To specify the format, log on to the MaxCompute client and set the odps.sql.select.output.format parameter. If you set this parameter to HumanReadable, the result is displayed in plain text. Otherwise, the result is displayed in the CSV format. For an example of how to specify this parameter by using an SDK, see the sketch after this list of tabs.
    • Table
      Note NULL values are represented as \N in the table. This representation distinguishes NULL values from the string "NULL" and is consistent with the display mode of Logview V1.0.
      You can perform the following operations on this tab:
      • Select a row to export data in the row.
      • Select the table heading to export data from the current page.
      • In the table heading, click the Drop-down arrow icon and select Select All Data or Deselect All as required.
      • Click Export to export data in the CSV format. The exported file has the .csv file name extension. You can open it by using a text editor such as Sublime Text.
    • Plain text
  • SourceXML

    This tab shows the source XML information about the job that is submitted to MaxCompute.

  • SQL Script

    This tab shows the SQL script of the job that is submitted to MaxCompute.

  • Summary

    This tab shows the overall information about the job that is submitted to MaxCompute. You can also view the memory usage and CPU utilization of the job on this tab.

  • Json Summary

    This tab shows the overall information about the job in the JSON format.

  • History

    This tab shows the historical information about a Fuxi instance if the instance is run multiple times.

  • Substatus History

    This tab shows the detailed information about a job, including the status code, status description, start time, duration, and end time.
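
As noted on the Result tab, the display format of SQL results is controlled by the odps.sql.select.output.format parameter. The following minimal sketch shows how to pass this parameter as a per-job hint, assuming the PyODPS SDK and the entry object o from the earlier sketch; the query is a placeholder. On the MaxCompute client, you can set the same parameter with a set statement before you run the query.

  # Minimal sketch, assuming the PyODPS SDK and the entry object `o`
  # created in the earlier sketch. The query is a placeholder.
  # The hint makes the Result tab in Logview show plain text instead of CSV.
  instance = o.execute_sql(
      'SELECT * FROM my_table LIMIT 10;',  # my_table is a placeholder table
      hints={'odps.sql.select.output.format': 'HumanReadable'},
  )
  print(instance.get_logview_address())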

Fuxi Sensor

Fuxi Sensor is a resource view that shows the resource usage of a MaxCompute job from multiple dimensions. You can use Fuxi Sensor to view the memory usage and CPU utilization of a Fuxi instance. You can also use Fuxi Sensor to locate job issues and analyze job performance. For example, you can use Fuxi Sensor in the following scenarios:
  • If an out-of-memory (OOM) error occurs, analyze how much memory is used.
  • Compare the amounts of requested and used resources to optimize resource requests, as shown in the sketch at the end of this topic.
The following examples show how to read the resource usage charts of a Fuxi instance.
  • CPU utilization
    The cpu_usage chart has two lines: one indicates the number of CPU cores requested (cpu_plan), and the other indicates the number of CPU cores used (cpu_usage). On the y-axis, a value of 100 corresponds to one core. For example, 400 indicates four cores.
  • Memory usage

    The mem_usage chart has two lines: one indicates the amount of memory requested (mem_plan), and the other indicates the amount of memory used (mem_usage).

    mem_usage consists of Resident Set Size (RSS) and PageCache. RSS is the resident memory that the kernel allocates when page faults occur, for example, when memory that is requested by calling malloc is backed by anonymous (non-file) mappings. If memory is insufficient, RSS cannot be reclaimed. PageCache is the memory that the kernel uses to cache files that are accessed by read and write requests, such as log files. If memory is insufficient, PageCache can be reclaimed.
    • Memory details
    • RSS usage
    • PageCache usage
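
If the charts show that actual usage stays well below the requested amounts (cpu_plan and mem_plan), you can lower the per-task resource request for SQL jobs. The following minimal sketch assumes the PyODPS SDK, the entry object o from the earlier sketches, and the odps.sql.mapper.cpu and odps.sql.mapper.memory hints; verify the exact parameter names and allowed ranges for your MaxCompute version, and treat the query and values as placeholders.

  # Minimal sketch, assuming the PyODPS SDK and the entry object `o` from the
  # earlier sketches. The hint names are assumptions to verify against your
  # MaxCompute version; the query and values are placeholders.
  instance = o.execute_sql(
      'SELECT col1, COUNT(*) AS cnt FROM my_table GROUP BY col1;',
      hints={
          'odps.sql.mapper.cpu': 100,      # 100 corresponds to one core on the cpu_usage scale
          'odps.sql.mapper.memory': 1024,  # per-mapper memory, in MB
      },
  )
  print(instance.get_logview_address())  # check Fuxi Sensor again after the change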