Managed Service for Prometheus collects metrics from TiDB, PD, and TiKV components and provides pre-built dashboards and alert rules. It lets you set up end-to-end TiDB monitoring without deploying or maintaining a separate Prometheus, Grafana, or Alertmanager stack.
Prerequisites
Before you begin, ensure that you have:
A VPC-connected Managed Service for Prometheus instance. For setup instructions, see Prometheus for ECS.
A running TiDB cluster deployed on ECS instances within the VPC.
Managed Service for Prometheus vs. self-managed Prometheus
Running self-managed Prometheus for TiDB introduces operational challenges:
Repeated deployments across VPCs. Services spread across multiple isolated VPCs each require an independent Prometheus, Grafana, and Alertmanager installation.
No native ECS service discovery. Open source Prometheus relies on static_configs or third-party registries. Building ECS-aware service discovery requires custom Go code that calls the Alibaba Cloud ECS POP API and integrates with the Prometheus codebase. This high-effort approach also complicates version upgrades.
Limited dashboards and alert rules. Open source Grafana dashboards for TiDB typically display raw metrics without deeper analysis based on TiDB operational best practices. No pre-built alert rule templates are available.
The following table compares the two approaches.
| Capability | Self-managed Prometheus | Managed Service for Prometheus |
|---|---|---|
| Deployment | Deploy Prometheus, Grafana, and Alertmanager on ECS in each VPC | Fully managed, out-of-the-box solution with built-in Grafana and alerting |
| Availability and scalability | Limited by self-managed infrastructure | High availability, high performance, and large data capacity |
| Service discovery | static_configs or third-party registries; no ECS integration | Built-in aliyun_sd_configs with ECS tag-based matching, consistent with labelSelector in Kubernetes |
| Dashboards | Basic open source dashboards | Specialized TiDB dashboard templates built on monitoring best practices |
| Alert rules | Must research and configure rules manually | 30+ pre-configured alert rules with GUI-based customization |
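To make the service-discovery difference concrete, tag-based ECS discovery is expressed with an aliyun_sd_configs block in the scrape configuration, much like a labelSelector in Kubernetes. The following sketch is illustrative only: the job name, port, and tag key/values are assumptions for a hypothetical TiDB deployment, and exact field names should be confirmed against the ARMS documentation.

```yaml
scrape_configs:
  - job_name: tidb
    # Discover ECS instances in the connected VPC by tag instead of
    # maintaining static_configs. Tag key/value below are assumptions.
    aliyun_sd_configs:
      - port: 10080            # TiDB status port (assumed default)
        tag_filters:
          - key: "app"
            values: ["tidb"]
    metrics_path: /metrics
```

As instances with matching tags are added or removed, scrape targets update automatically without editing the configuration.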
Step 1: Add the TiDB integration
Log in to the ARMS console.
In the left navigation pane, click Integration Center.
In the Database section, click TIDB, then follow the on-screen instructions.
Select the environment type: Kubernetes Environment or ECS (VPC).
Select the target Kubernetes cluster or VPC.
In the Configuration Information section, configure the following parameters and click OK.
| Parameter | Description |
|---|---|
| Access Name | A custom name for this integration |
| TIDB Cluster Name | The TiDB cluster name. Use a unique name for each integration to avoid metric collection conflicts and dashboard display errors |
| Namespace | A custom namespace |
| PD Server Container Name | The PD container name. To match multiple names, separate them with a vertical bar (\|) for an OR match (for example, pd\|pd1) |
| PD Metrics Port | The port that exposes PD metrics to Prometheus |
| PD Metrics Collection Path | The HTTP scrape path for PD metrics (typically /metrics) |
| TiDB Server Container Name | The TiDB server container name |
| TiDB Metrics Port | The port that exposes TiDB metrics to Prometheus |
| TiDB Metrics Collection Path | The HTTP scrape path for TiDB metrics (typically /metrics) |
| TiKV Server Container Name | The TiKV server container name |
| TiKV Metrics Port | The port that exposes TiKV metrics to Prometheus |
| TiKV Metrics Collection Path | The HTTP scrape path for TiKV metrics (typically /metrics) |
| Metric Collection Interval (seconds) | How often Prometheus scrapes TiDB metrics. Default: 30 seconds |
Verify the integration. After the integration completes, click the Collect Metrics tab to confirm that metrics are being collected from your TiDB components.
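Before or after wiring up the ports above, you can confirm that each component actually serves Prometheus metrics on its configured path. The sketch below is one way to do that; the hostnames are placeholders, and the ports shown in comments are the components' commonly used default status ports, which may differ in your deployment.

```python
import urllib.request

def count_metric_families(text: str) -> int:
    """Count metric families in Prometheus exposition text.

    Each family is announced by a line starting with '# TYPE'.
    """
    return sum(1 for line in text.splitlines() if line.startswith("# TYPE"))

def check_endpoint(url: str) -> int:
    """Fetch a /metrics endpoint and return the number of metric families."""
    with urllib.request.urlopen(url, timeout=5) as resp:
        return count_metric_families(resp.read().decode())

# Placeholder hosts; commonly used default status ports are
# PD 2379, TiDB 10080, TiKV 20180.
for name, url in [
    ("pd", "http://pd-host:2379/metrics"),
    ("tidb", "http://tidb-host:10080/metrics"),
    ("tikv", "http://tikv-host:20180/metrics"),
]:
    try:
        print(name, check_endpoint(url), "metric families")
    except OSError as err:
        print(name, "unreachable:", err)
```

A healthy endpoint returns a non-zero number of metric families; an unreachable one usually points at a wrong port, path, or a security group blocking the scrape.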
The integration appears on the Integration Management page in the ARMS console. This page includes the Integrated Environments, Integrated Addons, and Query Dashboards tabs, where you can check targets, metrics, dashboards, and alerts.
Step 2: View TiDB dashboards
Managed Service for Prometheus includes more than 20 built-in Grafana dashboards for TiDB components; no separate Grafana installation is required.
To access the dashboards:
Go to the Integration Management page and select your TiDB cluster.
Click the Dashboards tab.
Click a dashboard link to open it in Alibaba Cloud Grafana.
Step 3: Configure alert rules
Managed Service for Prometheus automatically creates more than 30 key alert rules for TiDB components. To review and enable them:
Go to the Integration Management page and select your TiDB cluster.
Click the Alert Rules tab.
Review the pre-configured rules, adjust thresholds based on your workload patterns, and enable them.
To add custom alert rules, see Create a Prometheus alert rule.
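As an illustration of what a custom rule can look like, the following sketch fires when p99 TiDB query latency stays above one second for five minutes. It uses the standard TiDB server query-duration histogram; the threshold, severity label, and rule name are placeholder assumptions to adapt to your workload.

```yaml
groups:
  - name: tidb-custom          # example group name (assumed)
    rules:
      - alert: TiDBHighQueryLatency
        # p99 latency across all TiDB instances, computed from the
        # query-duration histogram over a 5-minute window.
        expr: |
          histogram_quantile(0.99,
            sum(rate(tidb_server_handle_query_duration_seconds_bucket[5m])) by (le)
          ) > 1
        for: 5m
        labels:
          severity: warning    # placeholder severity
        annotations:
          summary: "TiDB p99 query latency above 1s for 5 minutes"
```

In the console, the equivalent rule is created through the GUI described in Create a Prometheus alert rule rather than by editing a rules file directly.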
What's next
Explore the built-in TiDB dashboards to understand cluster health, query performance, and storage metrics
Fine-tune alert thresholds based on your workload patterns
Use ECS tags with aliyun_sd_configs to dynamically manage scrape targets as your TiDB cluster scales