Product overview
Global Traffic Manager (GTM) helps enterprises achieve nearest access to application services, load balancing for high-concurrency traffic, and health checks for application services, and performs fault isolation or traffic switching based on health check results. This allows enterprises to build multi-active and disaster recovery architectures flexibly and quickly.
Feature overview
| Feature | Description | References |
| --- | --- | --- |
| IPAM pool configuration | An IPAM pool is a GTM feature for managing application service addresses (IP addresses or domain names). An IPAM pool represents a group of addresses that provide the same application service and share the same ISP or regional attributes. A GTM instance can be configured with multiple IPAM pools, so that users from different regions access different IPAM pools and achieve nearest access. When an IPAM pool becomes completely unavailable, traffic can be switched to a backup pool. | - |
| Visit strategy | A visit strategy helps enterprises easily manage global traffic. According to the traffic scheduling strategy configured by the customer, it returns different IPAM pools in DNS responses to users from different networks or regions, achieving nearest access and failover. There are two types of visit strategy, geography-based and latency-based, and an instance can enable only one type. | - |
| Health check | Health checks monitor the availability of the IP addresses in an IPAM pool in real time, using ping, TCP, or HTTP(S) monitoring. | - |
| Failover | When health checks find that the primary IPAM pool collection accessed by users is completely unavailable, the system automatically switches user traffic to the backup IPAM pool collection. This ensures that when application service addresses fail, the backup collection can still answer user DNS queries, reducing the risk of business interruption and keeping the business running stably. Failover is the core capability of GTM. | - |
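As a rough illustration of the TCP monitoring mentioned above, the sketch below checks whether an address accepts TCP connections within a timeout. This is a minimal sketch, not GTM's implementation: the real service probes from multiple regions and applies retry and threshold logic, and the function name here is hypothetical.

```python
import socket

def tcp_health_check(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds within
    the timeout, False otherwise. A single-probe approximation of a
    TCP health check; GTM itself probes from multiple regions."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

A real health checker would run this periodically per address and only mark an address unhealthy after several consecutive failures.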
Scenarios
Application service primary-backup disaster recovery
For example, an application service has two IP addresses 1.1.XX.XX and 2.2.XX.XX. Under normal circumstances, users access IP address 1.1.XX.XX. When IP address 1.1.XX.XX fails, you want to switch user traffic to IP address 2.2.XX.XX.
Through GTM, you can create two IPAM pools Pool A and Pool B, add IP addresses 1.1.XX.XX and 2.2.XX.XX to the two IPAM pools respectively, and configure health checks. In the visit strategy configuration, select Pool A as the primary IPAM pool collection and Pool B as the backup IPAM pool collection to implement application service primary-backup IP disaster recovery switching.
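The primary/backup switching logic described above can be sketched as a small selection function. This is an illustrative model only (function name and sample IPs are invented, not GTM's API): serve the healthy addresses of the primary pool, and fail over to the backup pool only when the primary is completely unavailable.

```python
def resolve(primary, backup, healthy):
    """Return the address list for a DNS response: healthy primary
    addresses if any exist, otherwise healthy backup addresses."""
    alive = [ip for ip in primary if ip in healthy]
    return alive if alive else [ip for ip in backup if ip in healthy]

# Pool A is primary, Pool B is backup, as in the scenario above.
pool_a = ["1.1.0.1"]   # stand-in for 1.1.XX.XX
pool_b = ["2.2.0.2"]   # stand-in for 2.2.XX.XX
```

While 1.1.0.1 passes health checks, `resolve(pool_a, pool_b, ...)` returns it; once it drops out of the healthy set, the backup address is returned instead.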
Multiple active IPs for application service
For example, an application service has three IP addresses 1.1.XX.XX, 2.2.XX.XX, and 3.3.XX.XX, all serving users simultaneously. You want all three IP addresses to be returned in DNS resolution while they are working normally; when one of them fails, the failed address should be temporarily removed from the DNS resolution list and not returned to users, then added back to the list once it recovers.
Through GTM, you can create one IPAM pool Pool A containing addresses (1.1.XX.XX, 2.2.XX.XX, and 3.3.XX.XX), select Pool A as the primary IPAM pool collection, and enable and configure health checks to implement multiple active IPs for the application service.
High concurrency application service load balancing
During online sales promotions such as Double 11, enterprises typically scale out temporarily to handle user access requests that surge severalfold. Generally, multiple SLB instances are purchased in the same region, with the expectation of distributing access traffic across different IP addresses.
With GTM, you only need to set the load balancing policy of the primary IPAM pool collection to Return All Addresses, so that each address bears an equal share of user access traffic, achieving load balancing across multiple SLB instances. Alternatively, choose Return Addresses by Weight and configure different weights for each IPAM pool and address, so that each address bears access traffic proportional to its weight.
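The Return Addresses by Weight behavior can be approximated by weighted random selection, as sketched below. The weights dict and function name are illustrative assumptions, not GTM's actual configuration schema; over many queries each address receives traffic proportional to its weight.

```python
import random

def pick_by_weight(weights, rng=None):
    """Choose one address with probability proportional to its weight,
    approximating a weighted DNS response policy for a single answer."""
    rng = rng or random.Random()
    addrs = list(weights)
    return rng.choices(addrs, weights=[weights[a] for a in addrs], k=1)[0]
```

With weights {A: 3, B: 1}, address A answers roughly three quarters of queries; setting all weights equal reproduces the Return All Addresses even split per query.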
Access acceleration for different regions
Large or multinational enterprises generally need to provide network services to national or global regions. Because network conditions vary by region, network access is generally affected by factors such as distance. Therefore, enterprises choose to establish service endpoints at core locations in several major regions, allowing users from different regions to access their respective core endpoints for the best access experience.
GTM offers two visit strategy methods, both of which can achieve this:
With a geography-based visit strategy, GTM can return addresses from specified IPAM pool collections to users in different regions, achieving nearest access and access acceleration for global users.
With a latency-based visit strategy, GTM can route end users to the application service cluster with the lowest latency, achieving access acceleration for end users.
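Stripped to its essence, a latency-based strategy picks the pool with the smallest measured latency for the querying user. In this sketch the latency figures are stand-ins; GTM derives them from its own measurements, and the function name is hypothetical.

```python
def lowest_latency_pool(latency_ms):
    """Return the name of the pool with the lowest measured latency
    (in milliseconds) for the current end user."""
    return min(latency_ms, key=latency_ms.get)
```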
How it works
For example, if the website service is www.example.com:
1. After activating a GTM instance, the system automatically assigns a CNAME access domain name gtm12345678.gtm-000.com.
2. Add three server IP addresses 1.1.XX.XX, 2.2.XX.XX, and 3.3.XX.XX to the GTM instance and enable health checks.
3. CNAME the website service www.example.com to gtm12345678.gtm-000.com.
Architecture flow diagram

Architecture flow description
1. The terminal queries the application service domain name www.example.com from the local recursive DNS system.
2. Assuming the local recursive DNS has no cached record for www.example.com, it sends a DNS query for the domain name to a DNS root server. Based on the domain name suffix, the root server responds with the address of the DNS server for .com.
3. The local recursive DNS then sends a query for www.example.com to the .com DNS server, which responds with the DNS server for example.com. If the domain name uses the Cloud DNS service, this is the Cloud DNS server.
4. The local recursive DNS then queries the Cloud DNS server for www.example.com. The Cloud DNS server finds in its database that www.example.com is CNAME'd to gtm12345678.gtm-000.com, and responds with that domain name.
5. The local recursive DNS then sends a query for gtm12345678.gtm-000.com to the Global Traffic Manager's DNS server. GTM responds with the final application service address based on its operating mechanism and pre-configured policies.
6. The local recursive DNS uses the IP address obtained from the last query as the final address for www.example.com, returns it to the end user, and caches it locally to answer future queries directly.
7. After receiving the IP address from the local recursive DNS response, the end user directly initiates a network connection to the application service and begins business communication.
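The CNAME-following step of the flow above can be modeled with a toy record table. The record data and final IP below are hypothetical stand-ins (the real answer is chosen by GTM's policies at query time); the point is only how a recursive resolver chases CNAMEs until it reaches an A record.

```python
# Hypothetical record data modelling the lookup chain described above.
RECORDS = {
    "www.example.com": ("CNAME", "gtm12345678.gtm-000.com"),
    "gtm12345678.gtm-000.com": ("A", "1.1.0.1"),  # address GTM chose to return
}

def resolve_chain(name, records, max_hops=8):
    """Follow CNAME records until an A record (IP address) is reached,
    as the local recursive DNS does in the flow above."""
    for _ in range(max_hops):
        rtype, value = records[name]
        if rtype == "A":
            return value
        name = value  # CNAME: continue resolving the target name
    raise RuntimeError("CNAME chain too long")
```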
Service architecture
Service architecture diagram

Service architecture description
Please refer to the service architecture diagram when reading the following description.
The DNS module in the GTM system resolves end user access to the application service's primary IPAM pool collection and backup IPAM pool collection. In this configuration, users in mainland China regions access the application service through the primary IPAM pool collection, while overseas users access it through the backup IPAM pool collection; the two collections serve as primary and backup for each other.
The HealthCheck module in the GTM system initiates health probes from multiple regions to the application service addresses in the IPAM pools. Health probes can use ping, TCP, or HTTP(S) methods.
When an application service address in the primary IPAM pool collection fails, the HealthCheck module accurately detects the anomaly and notifies the DNS module, which temporarily removes the abnormal address from the application service address list returned to users. When the HealthCheck module detects that the address has returned to normal, the DNS module restores it to the list returned to users.
If the primary IPAM pool collection experiences a complete failure, GTM switches the access traffic of users in mainland China regions to the backup IPAM pool collection "secondaryAddressPoolSet" according to the pre-configured switching policy between the active and backup IPAM pool collections. Conversely, if the backup IPAM pool collection experiences a complete failure, GTM switches the access traffic of overseas users to the primary IPAM pool collection "primaryAddressPoolSet".
In this way, end users automatically obtain the best available application service through the global traffic management system, and user access remains continuous and uninterrupted.
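The geo-based routing with mutual failover described in this section can be sketched as follows. The region labels and pool contents are illustrative assumptions; the two collection names follow the diagram labels quoted above.

```python
def route(region, pools, healthy_pools):
    """Geo + failover sketch: mainland China users prefer
    primaryAddressPoolSet, overseas users prefer secondaryAddressPoolSet;
    if a user's preferred collection is completely down, traffic
    switches to the other collection."""
    preferred = ("primaryAddressPoolSet" if region == "mainland_china"
                 else "secondaryAddressPoolSet")
    fallback = ("secondaryAddressPoolSet"
                if preferred == "primaryAddressPoolSet"
                else "primaryAddressPoolSet")
    chosen = preferred if preferred in healthy_pools else fallback
    return pools[chosen]
```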
Instance architecture
Global Traffic Manager consists of two parts: the control layer and the resolution layer:
Control layer: The control layer provides services through the console and OpenAPI, mainly implementing functions for adding, deleting, modifying, querying, and storing domain name resolution data, configuration data, monitoring data, log data, etc. The control layer is located in China (Zhangjiakou).
Resolution layer: The resolution layer provides services through a cluster of DNS servers deployed globally. The resolution layer receives domain name resolution record data distributed from the control layer and mainly implements the function of responding to query requests for domain name resolution record data. The resolution layer has node coverage in major continents and regions around the world.
Join us
DingTalk group: 36335002029