Compute-optimized instance families provide a 1:2 CPU-to-memory ratio for CPU-intensive workloads such as web servers, databases, batch computing, video encoding, game servers, data analytics, high-performance scientific computing, and machine learning.
The ic5 family uses a 1:1 CPU-to-memory ratio instead of 1:2.
Before you start:
Check instance availability by region: Not all instance types are available in every region.
Select an instance type: Determine which instance family fits your workload, then choose a specific size.
Understand instance type naming: Learn how to read instance type names and specification metrics.
Estimate costs: Use the ECS Price Calculator.
Choose an instance family
Instance naming convention
Instance type names follow a predictable pattern. For example, ecs.c9ae.4xlarge:
| Segment | Meaning | Example |
|---|---|---|
| c | Family: compute-optimized | c = compute |
| 9 | Generation | 9 = latest |
| a | Processor vendor | a = AMD, i = Intel (recent generations), blank = Intel (older generations) |
| e | Variant | e = performance-enhanced |
| 4xlarge | Size | 16 vCPUs |
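The naming rules above can be sketched as a small parser. This is an illustrative sketch, not an official API: the size-to-vCPU rule (small = 1, large = 2, xlarge = 4, N xlarge = 4 × N) and the 1:2 memory rule are derived from the specification tables in this topic.

```python
import re

SIZE_VCPUS = {"small": 1, "large": 2, "xlarge": 4}

def parse_instance_type(name: str) -> dict:
    """Parse an ECS instance type name such as ecs.c9ae.4xlarge."""
    _, family_part, size = name.split(".")
    # Split the family segment into letters, generation digits, and suffix,
    # e.g. "c9ae" -> ("c", "9", "ae"), "ic5" -> ("ic", "5", "").
    family, generation, suffix = re.fullmatch(
        r"([a-z]+)(\d+)([a-z]*)", family_part
    ).groups()
    if size in SIZE_VCPUS:
        vcpus = SIZE_VCPUS[size]
    else:  # e.g. "4xlarge" -> 4 * 4 = 16 vCPUs
        vcpus = 4 * int(size.removesuffix("xlarge"))
    return {
        "family": family,          # c = compute-optimized
        "generation": int(generation),
        "suffix": suffix,          # e.g. a = AMD, e = performance-enhanced
        "size": size,
        "vcpus": vcpus,
        # 1:2 ratio for compute-optimized families; note ic5 uses 1:1 instead.
        "memory_gib": vcpus * 2,
    }

info = parse_instance_type("ecs.c9ae.4xlarge")
print(info["vcpus"], info["memory_gib"])  # 16 32
```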
Instance family summary
The following tables list all compute-optimized instance families. Use them to narrow down your options before reading detailed specifications.
Recommended families (x86)
| Instance family | Processor | Architecture | Turbo frequency | eRDMA | Jumbo frames | vTPM | Enclave | Max vCPUs | Max network bandwidth |
|---|---|---|---|---|---|---|---|---|---|
| c9i | Intel Xeon 6 P-cores (Granite Rapids) | CIPU | 3.9 GHz | Yes | Yes | Yes | No | 192 | 64 Gbit/s |
| c9ae | AMD EPYC Turin | CIPU | 3.7 GHz | Yes | Yes | Yes | No | 192 | 100 Gbit/s |
| c9a | AMD EPYC Turin | CIPU | 4.1 GHz | Yes | Yes | Yes | No | 64 | 32 Gbit/s |
| c8i | Intel Xeon Emerald/Sapphire Rapids | CIPU | 3.2 GHz | Yes | Yes | Yes | Yes | 192 | 100 Gbit/s |
| c8ine | Intel Xeon Emerald/Sapphire Rapids | CIPU | 3.2 GHz | No | Yes | Yes | No | 32 | 44 Gbit/s |
| c8a | AMD EPYC Genoa | CIPU | 3.7 GHz | Yes | Yes | Yes | No | 192 | 64 Gbit/s |
| c8ae | AMD EPYC Genoa | CIPU | 3.75 GHz | Yes | Yes | Yes | No | 128 | 64 Gbit/s |
| c7 | Intel Xeon Ice Lake | 3rd-gen SHENLONG | 3.5 GHz | No | Yes | Yes | Yes | 128 | 64 Gbit/s |
| c7a | AMD EPYC MILAN | 3rd-gen SHENLONG | 3.5 GHz | No | No | No | No | 128 | 32 Gbit/s |
| c6e | Intel Xeon Cascade Lake | 3rd-gen SHENLONG | 3.2 GHz | No | No | No | No | 104 | 32 Gbit/s |
| c6 | Intel Xeon Cascade Lake | SHENLONG | 3.2 GHz | No | No | No | No | 104 | 25 Gbit/s |
Recommended families (Arm)
| Instance family | Processor | Architecture | Frequency | eRDMA | Jumbo frames | vTPM | Max vCPUs | Max network bandwidth |
|---|---|---|---|---|---|---|---|---|
| c8y | Yitian 710 | 4th-gen SHENLONG | 2.75 GHz | Yes | Yes | Yes | 128 | 64 Gbit/s |
| c6r | Ampere Altra | 3rd-gen SHENLONG | 2.8 GHz | No | No | No | 64 | 16 Gbit/s |
Not recommended families. These older families may be sold out; if so, use the recommended families listed above instead.
| Instance family | Processor | Upgrade to | Why upgrade |
|---|---|---|---|
| c5 | Intel Xeon Skylake/Cascade Lake | c6, c6e, or c7 | Inconsistent server platform assignment |
| ic5 | Intel Xeon Skylake/Cascade Lake | c6 or c7 | 1:1 CPU-to-memory ratio, IPv4 only, no IPv6 |
| sn1ne | Intel Xeon Broadwell/Skylake/Cascade Lake | c6, c6e, or c7 | Limited disk support (standard SSDs and ultra disks only) |
Match your workload
| Workload | Recommended families | Why |
|---|---|---|
| Web and application servers | c9i, c9ae, c8i, c8a | Latest generation, balanced performance |
| Databases (large/medium) | c9a, c8ae, c7 | High single-core turbo, consistent performance |
| Game servers (MMO frontend) | c9a, c8ae, c7, c7a | High clock speed, ultra-high PPS |
| Data analytics and batch computing | c9ae, c9i, c8a, c8i | High throughput, strong I/O |
| Video encoding and transcoding | c9ae, c9i, c8a | High core count, compute-dense |
| AI training and inference | c9ae, c8a, c8ae | High-frequency AMD, GPU-free ML workloads |
| HPC and scientific computing | c8ae, c9i | Performance-enhanced, eRDMA for low latency |
| Containers and microservices | c8y, c6r | Arm-based, cost-efficient for scale-out |
| Network-intensive (gateways, forwarding) | c8ine | Optimized for high connection counts and PPS |
| Confidential computing | c8i, c7 | Enclave support, trusted boot, Intel TME |
c9ae, performance-enhanced compute-optimized instance family
Built on the Cloud Infrastructure Processing Unit (CIPU) architecture with AMD EPYC Turin processors. Uses physical cores (no Hyper-Threading) for consistent computing performance, offers adjustable baseline storage and network bandwidth, and provides chip-level security hardening.
Use cases: Big data analytics (Spark, Flink, Elasticsearch), search and recommendation engines, core transactional processing (TP) systems, audio and video transcoding, AI training and inference, and enterprise applications (Java).
Specifications
Compute:
CPU-to-memory ratio: 1:2
AMD EPYC Turin processors with turbo frequency up to 3.7 GHz (physical cores)
Adjustable baseline storage and network bandwidth
For OS compatibility, see AMD instance type and operating system compatibility
Storage:
I/O optimized
Supports the Non-Volatile Memory Express (NVMe) protocol. For details, see NVMe protocol.
Supported disk types: elastic ephemeral disks, Enterprise SSDs (ESSDs), ESSD AutoPL disks, and regional ESSDs. For an overview, see Block Storage.
Smaller instances support burstable disk IOPS and bandwidth. For details, see Storage I/O performance.
Network:
Adjustable baseline network bandwidth
IPv4 and IPv6. For IPv6 details, see IPv6 communication.
Supports eRDMA and jumbo frames
Security: vTPM
Instance types
| Instance type | vCPUs | Memory (GiB) | Network baseline/burst bandwidth (Gbit/s) | Packet forwarding rate (PPS) | Connections | NIC queues | ENIs | Private IPv4 addresses per ENI | IPv6 addresses per ENI | Disk baseline IOPS | Disk baseline/burst bandwidth (Gbit/s) |
|---|---|---|---|---|---|---|---|---|---|---|---|
| ecs.c9ae.large | 2 | 4 | 2.5/burstable up to 25 | Up to 1,500,000 | Up to 500,000 | 2 | 3 | 6 | 6 | Up to 200,000 | 2.5/burstable up to 20 |
| ecs.c9ae.xlarge | 4 | 8 | 4/burstable up to 25 | Up to 1,600,000 | Up to 500,000 | 4 | 4 | 15 | 15 | Up to 200,000 | 3/burstable up to 20 |
| ecs.c9ae.2xlarge | 8 | 16 | 6/burstable up to 25 | Up to 2,500,000 | Up to 500,000 | 8 | 4 | 15 | 15 | Up to 200,000 | 4/burstable up to 20 |
| ecs.c9ae.4xlarge | 16 | 32 | 10/burstable up to 25 | Up to 3,200,000 | Up to 500,000 | 16 | 8 | 30 | 30 | Up to 200,000 | 5.5/burstable up to 20 |
| ecs.c9ae.8xlarge | 32 | 64 | 16/burstable up to 25 | Up to 5,000,000 | Up to 1,000,000 | 32 | 8 | 30 | 30 | Up to 200,000 | 8/burstable up to 20 |
| ecs.c9ae.12xlarge | 48 | 96 | 25/none | 7,500,000 | 1,500,000 | 48 | 8 | 30 | 30 | 150,000 | 13/none |
| ecs.c9ae.16xlarge | 64 | 128 | 32/none | 10,000,000 | 2,000,000 | 64 | 8 | 30 | 30 | 200,000 | 16/none |
| ecs.c9ae.24xlarge | 96 | 192 | 50/none | 15,000,000 | 3,000,000 | 64 | 15 | 30 | 30 | 300,000 | 25/none |
| ecs.c9ae.32xlarge | 128 | 256 | 64/none | 20,000,000 | 4,000,000 | 64 | 15 | 30 | 30 | 400,000 | 32/none |
| ecs.c9ae.48xlarge | 192 | 384 | 100/none | 30,000,000 | 6,000,000 | 64 | 15 | 50 | 50 | 600,000 | 50/none |
c9a, compute-optimized instance family
Built on the CIPU architecture with AMD EPYC Turin processors. Delivers consistent computing performance with chip-level security hardening.
Use cases: Large and medium-sized databases, game servers, financial quantization, blockchain, web and application servers, and general-purpose enterprise applications.
Specifications
Compute:
CPU-to-memory ratio: 1:2
AMD EPYC Turin processors with turbo frequency up to 4.1 GHz
For OS compatibility, see AMD instance type and operating system compatibility
Storage:
I/O optimized
NVMe protocol. For details, see NVMe protocol.
Supported disk types: elastic ephemeral disks, ESSDs, ESSD AutoPL disks, and regional ESSDs. For an overview, see Block Storage.
Smaller instances support burstable disk IOPS and bandwidth. For details, see Storage I/O performance.
Network:
IPv4 and IPv6. For details, see IPv6 communication.
Supports eRDMA and jumbo frames
Security: vTPM
Instance types
| Instance type | vCPUs | Memory (GiB) | Network baseline/burst bandwidth (Gbit/s) | Packet forwarding rate (PPS) | Connections | NIC queues | ENIs | Private IPv4 addresses per ENI | IPv6 addresses per ENI | Disk baseline IOPS | Disk baseline/burst bandwidth (Gbit/s) |
|---|---|---|---|---|---|---|---|---|---|---|---|
| ecs.c9a.large | 2 | 4 | 2.5/burstable up to 15 | Up to 1,200,000 | Up to 500,000 | 2 | 3 | 6 | 6 | Up to 110,000 | 2/burstable up to 15 |
| ecs.c9a.xlarge | 4 | 8 | 4/burstable up to 15 | Up to 1,400,000 | Up to 500,000 | 4 | 4 | 15 | 15 | Up to 110,000 | 3/burstable up to 15 |
| ecs.c9a.2xlarge | 8 | 16 | 6/burstable up to 15 | Up to 2,000,000 | Up to 500,000 | 8 | 4 | 15 | 15 | Up to 110,000 | 4/burstable up to 15 |
| ecs.c9a.4xlarge | 16 | 32 | 12/burstable up to 25 | Up to 3,000,000 | Up to 500,000 | 16 | 8 | 30 | 30 | Up to 110,000 | 5/burstable up to 15 |
| ecs.c9a.8xlarge | 32 | 64 | 16/burstable up to 32 | Up to 4,000,000 | Up to 800,000 | 32 | 8 | 30 | 30 | Up to 110,000 | 8/burstable up to 15 |
| ecs.c9a.16xlarge | 64 | 128 | 32/none | 7,500,000 | 1,500,000 | 64 | 8 | 30 | 30 | 120,000 | 16/none |
c9i, compute-optimized instance family
Built on the CIPU architecture with Intel Xeon 6 processors with P-cores (Granite Rapids). Delivers consistent computing performance with chip-level security hardening.
Use cases: Machine learning inference, data analytics, batch computing, video encoding, frontend game servers, high-performance scientific and engineering applications, and web servers.
Specifications
Compute:
CPU-to-memory ratio: 1:2
Intel Xeon Granite Rapids processors: 3.2 GHz base clock, 3.6 GHz all-core turbo frequency, and 3.9 GHz maximum single-core turbo frequency
For OS compatibility, see Intel instance types and operating system compatibility
The system may display different frequencies for this instance. The 3.6 GHz all-core turbo frequency is stable and guaranteed. The 3.9 GHz maximum single-core turbo frequency is a burst capability that depends on the overall CPU load of the physical server and is not covered by the Service-Level Agreement (SLA).
Storage:
I/O optimized
NVMe protocol. For details, see NVMe protocol.
Supported disk types: ESSDs and ESSD AutoPL disks. For an overview, see Block Storage.
Smaller instances support burstable disk IOPS and bandwidth. Storage I/O performance scales with instance size. For details, see Storage I/O performance.
Network:
IPv4 and IPv6. For details, see IPv6 communication.
Supports eRDMA and jumbo frames
Network performance scales with instance size.
Security: vTPM
Instance types
| Instance type | vCPUs | Memory (GiB) | Network baseline/burst bandwidth (Gbit/s) | Packet forwarding rate (PPS) | Connections | NIC queues | ENIs | Private IPv4 addresses per ENI | IPv6 addresses per ENI | Disk baseline/burst IOPS | Disk baseline/burst bandwidth (Gbit/s) |
|---|---|---|---|---|---|---|---|---|---|---|---|
| ecs.c9i.large | 2 | 4 | 2.5/burstable up to 15 | 1,000,000 | Up to 500,000 | 2 | 3 | 6 | 6 | 25,000/burstable up to 200,000 | 2/burstable up to 10 |
| ecs.c9i.xlarge | 4 | 8 | 4/burstable up to 15 | 1,200,000 | Up to 500,000 | 4 | 4 | 15 | 15 | 50,000/burstable up to 200,000 | 2.5/burstable up to 10 |
| ecs.c9i.2xlarge | 8 | 16 | 6/burstable up to 15 | 1,600,000 | Up to 500,000 | 8 | 4 | 15 | 15 | 60,000/burstable up to 200,000 | 4/burstable up to 10 |
| ecs.c9i.3xlarge | 12 | 24 | 10/burstable up to 15 | 2,400,000 | Up to 500,000 | 12 | 8 | 15 | 15 | 80,000/burstable up to 200,000 | 5/burstable up to 10 |
| ecs.c9i.4xlarge | 16 | 32 | 12/burstable up to 25 | 3,000,000 | 500,000 | 16 | 8 | 30 | 30 | 100,000/burstable up to 200,000 | 6/burstable up to 10 |
| ecs.c9i.6xlarge | 24 | 48 | 15/burstable up to 25 | 4,500,000 | 600,000 | 24 | 8 | 30 | 30 | 120,000/burstable up to 200,000 | 7.5/burstable up to 10 |
| ecs.c9i.8xlarge | 32 | 64 | 20/burstable up to 32 | 6,000,000 | 800,000 | 32 | 8 | 30 | 30 | 200,000/burstable up to 300,000 | 10/burstable up to 12 |
| ecs.c9i.12xlarge | 48 | 96 | 25/burstable up to 32 | 9,000,000 | 1,600,000 | 48 | 8 | 30 | 30 | 240,000/burstable up to 320,000 | 12/burstable up to 15 |
| ecs.c9i.16xlarge | 64 | 128 | 28/burstable up to 36 | 12,000,000 | 2,000,000 | 64 | 8 | 30 | 30 | 300,000/burstable up to 400,000 | 16/burstable up to 24 |
| ecs.c9i.24xlarge | 96 | 192 | 32/burstable up to 48 | 18,000,000 | 3,000,000 | 64 | 15 | 30 | 30 | 350,000/burstable up to 600,000 | 20/burstable up to 28 |
| ecs.c9i.32xlarge | 128 | 256 | 36/burstable up to 50 | 20,000,000 | 4,000,000 | 64 | 15 | 30 | 30 | 400,000/burstable up to 650,000 | 24/burstable up to 28 |
| ecs.c9i.48xlarge | 192 | 384 | 64/none | 24,000,000 | 6,000,000 | 64 | 15 | 50 | 50 | 500,000/burstable up to 800,000 | 32/none |
c8a, compute-optimized instance family
Built on the CIPU architecture with chip-level security hardening for consistent computing performance.
Use cases: Big data applications, web applications, AI training and inference, and audio and video transcoding.
Specifications
Compute:
CPU-to-memory ratio: 1:2
2.7 GHz AMD EPYC Genoa processors with turbo frequency up to 3.7 GHz
Hyper-Threading supported and enabled by default. To change this, see Change CPU options.
For OS compatibility, see AMD instance type and operating system compatibility
Storage:
I/O optimized
NVMe protocol. For details, see NVMe protocol.
Supported disk types: elastic ephemeral disks, ESSDs, ESSD AutoPL disks, and regional ESSDs. For an overview, see Block Storage.
Smaller instances support burstable disk IOPS and bandwidth. For details, see Storage I/O performance.
Network:
IPv4 and IPv6. For details, see IPv6 communication.
Supports eRDMA and jumbo frames
Smaller instances support burstable network bandwidth.
Security: vTPM
Instance types
| Instance type | vCPUs | Memory (GiB) | Network baseline/burst bandwidth (Gbit/s) | Packet forwarding rate (PPS) | Connections | NIC queues | ENIs | Private IPv4 addresses per ENI | IPv6 addresses per ENI | Disk baseline/burst IOPS | Disk baseline/burst bandwidth (Gbit/s) |
|---|---|---|---|---|---|---|---|---|---|---|---|
| ecs.c8a.large | 2 | 4 | 1.5/burstable up to 12.5 | 900,000 | Up to 250,000 | 2 | 3 | 6 | 6 | 20,000/burstable up to 110,000 | 1.5/burstable up to 10 |
| ecs.c8a.xlarge | 4 | 8 | 2.5/burstable up to 12.5 | 1,000,000 | Up to 250,000 | 4 | 4 | 15 | 15 | 30,000/burstable up to 110,000 | 2/burstable up to 10 |
| ecs.c8a.2xlarge | 8 | 16 | 4/burstable up to 12.5 | 1,600,000 | Up to 250,000 | 8 | 4 | 15 | 15 | 45,000/burstable up to 110,000 | 2.5/burstable up to 10 |
| ecs.c8a.4xlarge | 16 | 32 | 7/burstable up to 12.5 | 2,000,000 | 300,000 | 16 | 8 | 30 | 30 | 60,000/burstable up to 110,000 | 3.5/burstable up to 10 |
| ecs.c8a.8xlarge | 32 | 64 | 10/burstable up to 25 | 3,000,000 | 600,000 | 32 | 8 | 30 | 30 | 80,000/burstable up to 110,000 | 5/burstable up to 10 |
| ecs.c8a.12xlarge | 48 | 96 | 16/burstable up to 25 | 4,500,000 | 750,000 | 48 | 8 | 30 | 30 | 120,000/none | 8/burstable up to 10 |
| ecs.c8a.16xlarge | 64 | 128 | 20/burstable up to 25 | 6,000,000 | 1,000,000 | 64 | 8 | 30 | 30 | 160,000/none | 10/none |
| ecs.c8a.24xlarge | 96 | 192 | 32/none | 9,000,000 | 1,500,000 | 64 | 15 | 30 | 30 | 240,000/none | 16/none |
| ecs.c8a.32xlarge | 128 | 256 | 40/none | 12,000,000 | 2,000,000 | 64 | 15 | 30 | 30 | 320,000/none | 20/none |
| ecs.c8a.48xlarge | 192 | 384 | 64/none | 18,000,000 | 3,000,000 | 64 | 15 | 50 | 50 | 500,000/none | 32/none |
For ecs.c8a.large and ecs.c8a.xlarge, enable jumbo frames to reach the 12.5 Gbit/s burst network bandwidth.
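Enabling jumbo frames is an in-guest MTU change. A minimal sketch on Linux follows; the interface name `eth0` and the 8500-byte MTU are assumptions here, so confirm the correct interface and the supported MTU for your instance before applying.

```shell
# Raise the MTU on the primary NIC to use jumbo frames
# (eth0 and 8500 are assumptions; verify both for your instance).
sudo ip link set dev eth0 mtu 8500

# Verify the new MTU
ip link show dev eth0
```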
c8ae, performance-enhanced compute-optimized instance family
Built on the CIPU architecture with chip-level security hardening for consistent computing performance. The "e" variant uses higher-frequency AMD processors for performance-sensitive workloads.
Use cases:
AI workloads: deep learning, training, and inference
High-performance scientific computing (HPC)
Large and medium-sized databases, caches, and search clusters
MMO game servers
Enterprise applications with high performance requirements
Specifications
Compute:
CPU-to-memory ratio: 1:2
3.4 GHz AMD EPYC Genoa processors with single-core turbo frequency up to 3.75 GHz
Hyper-Threading supported and enabled by default. To change this, see Change CPU options.
For OS compatibility, see AMD instance type and operating system compatibility
Storage:
I/O optimized
NVMe protocol. For details, see NVMe protocol.
Supported disk types: ESSDs, ESSD AutoPL disks, and regional ESSDs. For an overview, see Block Storage.
Smaller instances support burstable disk IOPS and bandwidth. For details, see Storage I/O performance.
Network:
IPv4 and IPv6. For details, see IPv6 communication.
Supports eRDMA and jumbo frames
Smaller instances support burstable network bandwidth.
Security: vTPM
Instance types
| Instance type | vCPUs | Memory (GiB) | Network baseline/burst bandwidth (Gbit/s) | Packet forwarding rate (PPS) | vTPM | Connections | NIC queues | ENIs | Private IPv4 addresses per ENI | IPv6 addresses per ENI | Disk baseline/burst IOPS | Disk baseline/burst bandwidth (Gbit/s) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| ecs.c8ae.large | 2 | 4 | 3/burstable up to 15 | 1,000,000 | Yes | Up to 300,000 | 2 | 3 | 6 | 6 | 30,000/burstable up to 200,000 | 2/burstable up to 10 |
| ecs.c8ae.xlarge | 4 | 8 | 4/burstable up to 15 | 1,200,000 | Yes | Up to 300,000 | 4 | 4 | 15 | 15 | 50,000/burstable up to 200,000 | 2.5/burstable up to 10 |
| ecs.c8ae.2xlarge | 8 | 16 | 6/burstable up to 15 | 1,600,000 | Yes | Up to 300,000 | 8 | 4 | 15 | 15 | 60,000/burstable up to 200,000 | 3/burstable up to 10 |
| ecs.c8ae.4xlarge | 16 | 32 | 12/burstable up to 25 | 3,000,000 | Yes | 500,000 | 16 | 8 | 30 | 30 | 100,000/burstable up to 200,000 | 6/burstable up to 10 |
| ecs.c8ae.8xlarge | 32 | 64 | 20/burstable up to 25 | 6,000,000 | Yes | 1,000,000 | 32 | 8 | 30 | 30 | 200,000/none | 10/none |
| ecs.c8ae.16xlarge | 64 | 128 | 32/none | 9,000,000 | Yes | 1,500,000 | 64 | 8 | 30 | 30 | 250,000/none | 16/none |
| ecs.c8ae.32xlarge | 128 | 256 | 64/none | 18,000,000 | Yes | 3,000,000 | 64 | 15 | 30 | 30 | 500,000/none | 32/none |
For ecs.c8ae.large and ecs.c8ae.xlarge, enable jumbo frames to reach the 15 Gbit/s burst network bandwidth.
c8i, compute-optimized instance family
Built on the CIPU architecture with chip-level security hardening for consistent computing performance.
Use cases: Machine learning inference, data analytics, batch computing, video encoding, frontend game servers, high-performance scientific and engineering applications, and web servers.
Specifications
Compute:
CPU-to-memory ratio: 1:2
Intel Xeon Emerald Rapids or Intel Xeon Sapphire Rapids processors with a base clock speed of at least 2.7 GHz and an all-core turbo frequency of 3.2 GHz
Hyper-Threading supported and enabled by default. To change this, see Change CPU options.
For OS compatibility, see Intel instance types and operating system compatibility
The system randomly assigns one of the preceding processors when you purchase an instance. You cannot select a specific processor.
Storage:
I/O optimized
NVMe protocol. For details, see NVMe protocol.
Supported disk types: elastic ephemeral disks, ESSDs, ESSD AutoPL disks, and regional ESSDs. For an overview, see Block Storage.
Smaller instances support burstable disk IOPS and bandwidth. For details, see Storage I/O performance.
Network:
IPv4 and IPv6. For details, see IPv6 communication.
Supports eRDMA and jumbo frames
Smaller instances support burstable network bandwidth.
Security:
Trusted boot based on Trusted Cryptography Module (TCM) or TPM chips. During a trusted boot, all modules in the boot chain from the underlying server to the ECS instance are measured and verified.
Instances with 4 or more vCPUs support Alibaba Cloud virtualized enclave for confidential computing.
Intel Total Memory Encryption (TME)
Instance types
| Instance type | vCPUs | Memory (GiB) | Network baseline/burst bandwidth (Gbit/s) | Packet forwarding rate (PPS) | Connections | NIC queues | ENIs | Private IPv4 addresses per ENI | IPv6 addresses per ENI | Disk baseline/burst IOPS | Disk baseline/burst bandwidth (Gbit/s) |
|---|---|---|---|---|---|---|---|---|---|---|---|
| ecs.c8i.large | 2 | 4 | 2.5/burstable up to 15 | 1,000,000 | Up to 300,000 | 2 | 3 | 6 | 6 | 25,000/burstable up to 200,000 | 2/burstable up to 10 |
| ecs.c8i.xlarge | 4 | 8 | 4/burstable up to 15 | 1,200,000 | Up to 300,000 | 4 | 4 | 15 | 15 | 50,000/burstable up to 200,000 | 2.5/burstable up to 10 |
| ecs.c8i.2xlarge | 8 | 16 | 6/burstable up to 15 | 1,600,000 | Up to 300,000 | 8 | 4 | 15 | 15 | 60,000/burstable up to 200,000 | 4/burstable up to 10 |
| ecs.c8i.3xlarge | 12 | 24 | 10/burstable up to 15 | 2,400,000 | Up to 300,000 | 12 | 8 | 15 | 15 | 80,000/burstable up to 200,000 | 5/burstable up to 10 |
| ecs.c8i.4xlarge | 16 | 32 | 12/burstable up to 25 | 3,000,000 | 350,000 | 16 | 8 | 30 | 30 | 100,000/burstable up to 200,000 | 6/burstable up to 10 |
| ecs.c8i.6xlarge | 24 | 48 | 15/burstable up to 25 | 4,500,000 | 500,000 | 24 | 8 | 30 | 30 | 120,000/burstable up to 200,000 | 7.5/burstable up to 10 |
| ecs.c8i.8xlarge | 32 | 64 | 20/burstable up to 25 | 6,000,000 | 800,000 | 32 | 8 | 30 | 30 | 200,000/none | 10/none |
| ecs.c8i.12xlarge | 48 | 96 | 25/none | 9,000,000 | 1,000,000 | 48 | 8 | 30 | 30 | 300,000/none | 12/none |
| ecs.c8i.16xlarge | 64 | 128 | 32/none | 12,000,000 | 1,600,000 | 64 | 8 | 30 | 30 | 360,000/none | 20/none |
| ecs.c8i.24xlarge | 96 | 192 | 50/none | 18,000,000 | 2,000,000 | 64 | 15 | 30 | 30 | 500,000/none | 24/none |
| ecs.c8i.48xlarge | 192 | 384 | 100/none | 30,000,000 | 4,000,000 | 64 | 15 | 50 | 50 | 1,000,000/none | 48/none |
c8ine, network-enhanced compute-optimized instance family
Built on the CIPU architecture, optimized for network-intensive scenarios with stable computing performance.
Use cases: Network access layer gateways, traffic and data forwarding, pre-processing middleware, large websites, e-commerce platforms, and AI applications as part of a cloud solution.
Specifications
Compute:
CPU-to-memory ratio: 1:2
Intel Xeon Emerald Rapids or Intel Xeon Sapphire Rapids processors with a 2.7 GHz base clock and 3.2 GHz all-core turbo frequency
Hyper-Threading supported and enabled by default. To change this, see Change CPU options.
Storage:
I/O optimized
NVMe protocol. For details, see NVMe protocol.
Supported disk types: ESSDs, ESSD AutoPL disks, and regional ESSDs. For an overview, see Block Storage.
Smaller instances support burstable disk IOPS and bandwidth.
Network:
IPv4 and IPv6. For details, see IPv6 communication.
Supports jumbo frames
Network performance scales with instance size.
Security: vTPM
Instance types
| Instance type | vCPUs | Memory (GiB) | Network baseline/burst bandwidth (Gbit/s) | Connections | NIC queues | ENIs | Private IPv4 addresses per ENI | IPv6 addresses per ENI | EBS multi-queue | Disk baseline/burst IOPS | Disk baseline/burst bandwidth (Gbit/s) |
|---|---|---|---|---|---|---|---|---|---|---|---|
| ecs.c8ine.large | 2 | 4 | 4/burstable up to 24 | 600,000 | 2 | 3 | 10 | 10 | 1 | 20,000/burstable up to 80,000 | 2/burstable up to 8 |
| ecs.c8ine.xlarge | 4 | 8 | 7/burstable up to 28 | 1,200,000 | 4 | 4 | 15 | 15 | 1 | 40,000/burstable up to 80,000 | 2.5/burstable up to 8 |
| ecs.c8ine.2xlarge | 8 | 16 | 12/burstable up to 35 | 2,000,000 | 8 | 6 | 15 | 15 | 2 | 50,000/burstable up to 80,000 | 4/burstable up to 8 |
| ecs.c8ine.4xlarge | 16 | 32 | 23/burstable up to 44 | 3,500,000 | 16 | 8 | 30 | 30 | 2 | 80,000/burstable up to 100,000 | 6/burstable up to 10 |
| ecs.c8ine.8xlarge | 32 | 64 | 44/none | 7,000,000 | 32 | 8 | 30 | 30 | 4 | 100,000/none | 10/none |
c8y, compute-optimized instance family (Arm)
Built on in-house Arm-based Yitian 710 processors and the fourth-generation SHENLONG architecture. Uses fast path acceleration on chips to improve storage, network, and computing performance.
Use cases: Containers, microservices, websites, application servers, video encoding and decoding, HPC, and CPU-based machine learning.
Specifications
Compute:
CPU-to-memory ratio: 1:2
Yitian 710 processors at 2.75 GHz
Storage:
I/O optimized
NVMe protocol. For details, see NVMe protocol.
Supported disk types: elastic ephemeral disks, ESSDs, ESSD AutoPL disks, and regional ESSDs. For an overview, see Block Storage.
Smaller instances support burstable disk IOPS and bandwidth. For details, see Storage I/O performance.
Network:
IPv4 and IPv6. For details, see IPv6 communication.
Supports eRDMA and jumbo frames
Smaller instances support burstable network bandwidth.
Security: vTPM
Instance types
| Instance type | vCPUs | Memory (GiB) | Network baseline/burst bandwidth (Gbit/s) | Packet forwarding rate (PPS) | Connections | NIC queues | ENIs | Private IPv4 addresses per ENI | IPv6 addresses per ENI | Max data disks | Disk baseline/burst IOPS | Disk baseline/burst bandwidth (Gbit/s) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| ecs.c8y.small | 1 | 2 | 1/10 | 500,000 | Up to 250,000 | 1 | 2 | 3 | 3 | 5 | 10,000/burstable up to 110,000 | 1/burstable up to 10 |
| ecs.c8y.large | 2 | 4 | 2/10 | 900,000 | Up to 250,000 | 2 | 3 | 6 | 6 | 8 | 20,000/burstable up to 110,000 | 1.5/burstable up to 10 |
| ecs.c8y.xlarge | 4 | 8 | 3/10 | 1,000,000 | Up to 250,000 | 4 | 4 | 15 | 15 | 8 | 40,000/burstable up to 110,000 | 2/burstable up to 10 |
| ecs.c8y.2xlarge | 8 | 16 | 5/10 | 1,600,000 | Up to 250,000 | 8 | 4 | 15 | 15 | 16 | 50,000/burstable up to 110,000 | 3/burstable up to 10 |
| ecs.c8y.4xlarge | 16 | 32 | 10/25 | 3,000,000 | 400,000 | 16 | 8 | 30 | 30 | 16 | 80,000/burstable up to 110,000 | 5/burstable up to 10 |
| ecs.c8y.8xlarge | 32 | 64 | 16/25 | 5,000,000 | 750,000 | 32 | 8 | 30 | 30 | 16 | 125,000/none | 10/none |
| ecs.c8y.16xlarge | 64 | 128 | 32/none | 10,000,000 | 1,500,000 | 64 | 8 | 30 | 30 | 32 | 250,000/none | 16/none |
| ecs.c8y.32xlarge | 128 | 256 | 64/none | 20,000,000 | 3,000,000 | 64 | 15 | 30 | 30 | 32 | 500,000/none | 32/none |
To use the ecs.c8y.32xlarge instance type, submit a ticket.
c7a, compute-optimized instance family
Built on the third-generation SHENLONG architecture with fast path acceleration on chips for improved storage, network, and computing performance.
Use cases: Video encoding and decoding, high-PPS scenarios (live commenting, telecom data forwarding), web and game frontend servers, DevOps development and testing, data analytics and batch computing, high-performance scientific and engineering applications, and enterprise applications.
Specifications
Compute:
CPU-to-memory ratio: 1:2
2.55 GHz AMD EPYC MILAN processors with single-core turbo frequency up to 3.5 GHz
Hyper-Threading supported and enabled by default. To change this, see Change CPU options.
For OS compatibility, see AMD instance type and operating system compatibility
Storage:
I/O optimized
Supported disk types: ESSDs, ESSD AutoPL disks, and regional ESSDs. For an overview, see Block Storage.
Smaller instances support burstable disk IOPS and bandwidth. For details, see Storage I/O performance.
Network:
IPv4 and IPv6. For details, see IPv6 communication.
Smaller instances support burstable network bandwidth.
Instance types
| Instance type | vCPUs | Memory (GiB) | Network baseline/burst bandwidth (Gbit/s) | Packet forwarding rate (PPS) | Connections | NIC queues | ENIs | Private IPv4 addresses per ENI | IPv6 addresses per ENI | Disk baseline/burst IOPS | Disk baseline/burst bandwidth (Gbit/s) |
|---|---|---|---|---|---|---|---|---|---|---|---|
| ecs.c7a.large | 2 | 4 | 1/burstable up to 10 | 900,000 | Up to 250,000 | 2 | 3 | 6 | 6 | 12,500/burstable up to 110,000 | 1/burstable up to 6 |
| ecs.c7a.xlarge | 4 | 8 | 1.5/burstable up to 10 | 1,000,000 | Up to 250,000 | 4 | 4 | 15 | 15 | 20,000/burstable up to 110,000 | 1.5/burstable up to 6 |
| ecs.c7a.2xlarge | 8 | 16 | 2.5/burstable up to 10 | 1,600,000 | Up to 250,000 | 8 | 4 | 15 | 15 | 30,000/burstable up to 110,000 | 2/burstable up to 6 |
| ecs.c7a.4xlarge | 16 | 32 | 5/burstable up to 10 | 2,000,000 | 300,000 | 8 | 8 | 30 | 30 | 60,000/burstable up to 110,000 | 3/burstable up to 6 |
| ecs.c7a.8xlarge | 32 | 64 | 8/burstable up to 10 | 3,000,000 | 600,000 | 16 | 7 | 30 | 30 | 75,000/burstable up to 110,000 | 4/burstable up to 6 |
| ecs.c7a-nps1.8xlarge | 32 | 64 | 8/burstable up to 10 | 3,000,000 | 600,000 | 16 | 7 | 30 | 30 | 75,000/burstable up to 110,000 | 4/burstable up to 6 |
| ecs.c7a.16xlarge | 64 | 128 | 16/none | 6,000,000 | 1,000,000 | 32 | 7 | 30 | 30 | 150,000/none | 8/none |
| ecs.c7a-nps1.16xlarge | 64 | 128 | 16/none | 3,000,000 | 1,000,000 | 32 | 7 | 30 | 30 | 150,000/none | 8/none |
| ecs.c7a.32xlarge | 128 | 256 | 32/none | 12,000,000 | 2,000,000 | 32 | 15 | 30 | 30 | 300,000/none | 16/none |
The Ubuntu 16 and Debian 9 kernels do not support AMD EPYC MILAN processors, so instances created from Ubuntu 16 or Debian 9 images cannot start.
c7, compute-optimized instance family
Built on the third-generation SHENLONG architecture with fast path acceleration on chips for improved storage, network, and computing performance.
Use cases: High-PPS scenarios (live commenting, telecom data forwarding), MMO game frontend servers, web servers, data analytics, batch computing, video encoding, high-performance scientific and engineering applications, secure trusted computing, enterprise applications, and blockchain.
Specifications
Compute:
CPU-to-memory ratio: 1:2
Third-generation Intel Xeon Scalable (Ice Lake) processors with a 2.7 GHz base frequency and 3.5 GHz all-core turbo frequency
Hyper-Threading supported and enabled by default. To change this, see Change CPU options.
Storage:
I/O optimized
Supported disk types: ESSDs, ESSD AutoPL disks, and regional ESSDs. For an overview, see Block Storage.
Smaller instances support burstable disk IOPS and bandwidth. For details, see Storage I/O performance.
Network:
IPv4 and IPv6. For details, see IPv6 communication.
Supports jumbo frames
Smaller instances support burstable network bandwidth.
Security: vTPM and Alibaba Cloud virtualized enclave
Instance types
| Instance type | vCPUs | Memory (GiB) | Network baseline/burst bandwidth (Gbit/s) | Packet forwarding rate (PPS) | vTPM | Connections | NIC queues | ENIs | Private IPv4 addresses per ENI | IPv6 addresses per ENI | Max data disks | Disk baseline/burst IOPS | Disk baseline/burst bandwidth (Gbit/s) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| ecs.c7.large | 2 | 4 | 2/burstable up to 12.5 | 1,100,000 | Yes | Up to 500,000 | 2 | 3 | 6 | 6 | 8 | 20,000/burstable up to 160,000 | 1.5/burstable up to 10 |
| ecs.c7.xlarge | 4 | 8 | 3/burstable up to 12.5 | 1,100,000 | Yes | Up to 500,000 | 4 | 4 | 15 | 15 | 8 | 40,000/burstable up to 160,000 | 2/burstable up to 10 |
| ecs.c7.2xlarge | 8 | 16 | 5/burstable up to 15 | 1,600,000 | Yes | Up to 500,000 | 8 | 4 | 15 | 15 | 16 | 50,000/burstable up to 160,000 | 3/burstable up to 10 |
| ecs.c7.3xlarge | 12 | 24 | 8/burstable up to 15 | 2,400,000 | Yes | Up to 500,000 | 8 | 8 | 15 | 15 | 16 | 70,000/burstable up to 160,000 | 4/burstable up to 10 |
| ecs.c7.4xlarge | 16 | 32 | 10/burstable up to 25 | 3,000,000 | Yes | 500,000 | 8 | 8 | 30 | 30 | 16 | 80,000/burstable up to 160,000 | 5/burstable up to 10 |
| ecs.c7.6xlarge | 24 | 48 | 12/burstable up to 25 | 4,500,000 | Yes | 550,000 | 12 | 8 | 30 | 30 | 16 | 110,000/burstable up to 160,000 | 6/burstable up to 10 |
| ecs.c7.8xlarge | 32 | 64 | 16/burstable up to 32 | 6,000,000 | Yes | 600,000 | 16 | 8 | 30 | 30 | 24 | 160,000/none | 10/none |
| ecs.c7.16xlarge | 64 | 128 | 32/none | 12,000,000 | Yes | 1,200,000 | 32 | 8 | 30 | 30 | 32 | 360,000/none | 16/none |
| ecs.c7.32xlarge | 128 | 256 | 64/none | 24,000,000 | Yes | 2,400,000 | 32 | 15 | 30 | 30 | 32 | 600,000/none | 32/none |
c6r, compute-optimized instance family (Arm)
Built on the third-generation SHENLONG architecture with fast path acceleration on chips for improved storage, network, and computing performance. Uses Arm-based Ampere Altra processors.
Use cases: Containers and microservices, DevOps development and testing, websites and application servers, CPU-based machine learning and inference, and high-performance scientific and engineering applications.
Specifications
Compute:
CPU-to-memory ratio: 1:2
2.8 GHz Ampere Altra processors
Storage:
I/O optimized
Supported disk types: ESSDs, ESSD AutoPL disks, regional ESSDs, standard SSDs, and ultra disks. For an overview, see Block Storage.
For details on storage performance, see Storage I/O performance.
Network:
IPv4 and IPv6. For details, see IPv6 communication.
Smaller instances support burstable network bandwidth.
Instance types
| Instance type | vCPUs | Memory (GiB) | Network baseline/burst bandwidth (Gbit/s) | Packet forwarding rate (PPS) | Connections | NIC queues | ENIs | Private IPv4 addresses per ENI | IPv6 addresses per ENI | Disk baseline IOPS | Disk baseline bandwidth (Gbit/s) |
|---|---|---|---|---|---|---|---|---|---|---|---|
| ecs.c6r.large | 2 | 4 | 1/10 | 900,000 | Up to 250,000 | 2 | 3 | 6 | 1 | 12,500 | 1 |
| ecs.c6r.xlarge | 4 | 8 | 1/10 | 1,000,000 | Up to 250,000 | 4 | 4 | 15 | 1 | 20,000 | 1.5 |
| ecs.c6r.2xlarge | 8 | 16 | 2/10 | 1,600,000 | Up to 250,000 | 8 | 4 | 15 | 1 | 30,000 | 2 |
| ecs.c6r.4xlarge | 16 | 32 | 5/10 | 2,000,000 | 300,000 | 8 | 8 | 30 | 1 | 60,000 | 3 |
| ecs.c6r.8xlarge | 32 | 64 | 8/10 | 3,000,000 | 600,000 | 16 | 7 | 30 | 1 | 75,000 | 4 |
| ecs.c6r.16xlarge | 64 | 128 | 16/none | 6,000,000 | 900,000 | 32 | 8 | 30 | 1 | 150,000 | 8 |
c6a, compute-optimized instance family
Built on the SHENLONG architecture. Offloads virtualization features to dedicated hardware for predictable, consistent performance and reduced overhead.
Use cases: Video encoding and decoding, high-PPS scenarios (live commenting, telecom data forwarding), web frontend servers, MMO game frontend servers, and DevOps development and testing.
Specifications
Compute:
CPU-to-memory ratio: 1:2
2.6 GHz AMD EPYC ROME processors with turbo frequency of 3.3 GHz
Hyper-Threading supported and enabled by default. To change this, see Change CPU options.
For OS compatibility, see AMD instance type and operating system compatibility.
Storage:
I/O optimized
Supported disk types: ESSDs, ESSD AutoPL disks, standard SSDs, and ultra disks. For an overview, see Block Storage.
For details on storage performance, see Storage I/O performance.
Network:
IPv4 and IPv6. For details, see IPv6 communication.
Instance types
| Instance type | vCPUs | Memory (GiB) | Network baseline/burst bandwidth (Gbit/s) | Packet forwarding rate (PPS) | Connections | NIC queues | ENIs | Private IPv4 addresses per ENI | IPv6 addresses per ENI | Disk baseline IOPS | Disk baseline bandwidth (Gbit/s) |
|---|---|---|---|---|---|---|---|---|---|---|---|
| ecs.c6a.large | 2 | 4 | 1/10 | 900,000 | Up to 250,000 | 2 | 2 | 6 | 1 | 12,500 | 1 |
| ecs.c6a.xlarge | 4 | 8 | 1.5/10 | 1,000,000 | Up to 250,000 | 4 | 3 | 15 | 1 | 20,000 | 1.5 |
| ecs.c6a.2xlarge | 8 | 16 | 2.5/10 | 1,600,000 | Up to 250,000 | 8 | 4 | 15 | 1 | 30,000 | 2 |
| ecs.c6a.4xlarge | 16 | 32 | 5/10 | 2,000,000 | 300,000 | 8 | 8 | 30 | 1 | 60,000 | 3.1 |
| ecs.c6a.8xlarge | 32 | 64 | 8/10 | 3,000,000 | 600,000 | 16 | 7 | 30 | 1 | 75,000 | 4.1 |
| ecs.c6a.16xlarge | 64 | 128 | 16/none | 6,000,000 | 1,000,000 | 32 | 8 | 30 | 1 | 150,000 | 8.2 |
| ecs.c6a.32xlarge | 128 | 256 | 32/none | 12,000,000 | 2,000,000 | 32 | 15 | 30 | 1 | 300,000 | 16.4 |
c6e, performance-enhanced compute-optimized instance family
Built on the third-generation SHENLONG architecture. Offloads virtualization features to dedicated hardware and uses fast path acceleration on chips for improved storage, network, and computing performance.
Use cases: High-PPS scenarios (live commenting, telecom data forwarding), web frontend servers, MMO game frontend servers, data analytics, batch computing, video encoding, and high-performance scientific and engineering applications.
Specifications
Compute:
CPU-to-memory ratio: 1:2
2.5 GHz Intel Xeon Platinum 8269CY (Cascade Lake) processors with turbo frequency of 3.2 GHz
Hyper-Threading supported and enabled by default. To change this, see Change CPU options.
Storage:
I/O optimized
Supported disk types: ESSDs, ESSD AutoPL disks, and regional ESSDs. For an overview, see Block Storage.
For details on storage performance, see Storage I/O performance.
Network:
IPv4 and IPv6. For details, see IPv6 communication.
For higher concurrent connections and network packet forwarding, consider the g7ne instance family.
Instance types
| Instance type | vCPUs | Memory (GiB) | Network baseline/burst bandwidth (Gbit/s) | Packet forwarding rate (PPS) | Connections | NIC queues | ENIs | Private IPv4 addresses per ENI | IPv6 addresses per ENI | Disk baseline IOPS | Disk baseline bandwidth (Gbit/s) |
|---|---|---|---|---|---|---|---|---|---|---|---|
| ecs.c6e.large | 2 | 4 | 1.2/burstable up to 10 | 900,000 | Up to 250,000 | 2 | 3 | 6 | 1 | 20,000 | 1 |
| ecs.c6e.xlarge | 4 | 8 | 2/burstable up to 10 | 1,000,000 | Up to 250,000 | 4 | 4 | 15 | 1 | 40,000 | 1.5 |
| ecs.c6e.2xlarge | 8 | 16 | 3/burstable up to 10 | 1,600,000 | Up to 250,000 | 8 | 4 | 15 | 1 | 50,000 | 2 |
| ecs.c6e.4xlarge | 16 | 32 | 6/burstable up to 10 | 3,000,000 | 300,000 | 8 | 8 | 30 | 1 | 80,000 | 3 |
| ecs.c6e.8xlarge | 32 | 64 | 10/none | 6,000,000 | 600,000 | 16 | 8 | 30 | 1 | 150,000 | 5 |
| ecs.c6e.13xlarge | 52 | 96 | 16/none | 9,000,000 | 1,000,000 | 32 | 7 | 30 | 1 | 240,000 | 8 |
| ecs.c6e.26xlarge | 104 | 192 | 32/none | 24,000,000 | 1,800,000 | 32 | 15 | 30 | 1 | 480,000 | 16 |
c6, compute-optimized instance family
Built on the SHENLONG architecture. Offloads virtualization features to dedicated hardware for predictable, consistent performance and reduced overhead.
Use cases: High-PPS scenarios (live commenting, telecom data forwarding), web frontend servers, MMO game frontend servers, data analytics, batch computing, video encoding, and high-performance scientific and engineering applications.
Specifications
Compute:
CPU-to-memory ratio: 1:2
2.5 GHz Intel Xeon Platinum 8269CY (Cascade Lake) processors with turbo frequency of 3.2 GHz
Hyper-Threading supported and enabled by default. To change this, see Change CPU options.
Storage:
I/O optimized
Supported disk types: ESSDs, ESSD AutoPL disks, standard SSDs, and ultra disks. For an overview, see Block Storage.
For details on storage performance, see Storage I/O performance.
A single instance in this family can deliver up to 200,000 IOPS.
Network:
IPv4 and IPv6. For details, see IPv6 communication.
Instance types
| Instance type | vCPUs | Memory (GiB) | Network baseline/burst bandwidth (Gbit/s) | Packet forwarding rate (PPS) | Connections | NIC queues | ENIs | Private IPv4 addresses per ENI | IPv6 addresses per ENI | Disk baseline IOPS | Disk baseline bandwidth (Gbit/s) |
|---|---|---|---|---|---|---|---|---|---|---|---|
| ecs.c6.large | 2 | 4 | 1/burstable up to 3 | 300,000 | Up to 250,000 | 2 | 2 | 6 | 1 | 10,000 | 1 |
| ecs.c6.xlarge | 4 | 8 | 1.5/burstable up to 5 | 500,000 | Up to 250,000 | 4 | 3 | 10 | 1 | 20,000 | 1.5 |
| ecs.c6.2xlarge | 8 | 16 | 2.5/burstable up to 8 | 800,000 | Up to 250,000 | 8 | 4 | 10 | 1 | 25,000 | 2 |
| ecs.c6.3xlarge | 12 | 24 | 4/burstable up to 10 | 900,000 | Up to 250,000 | 8 | 6 | 10 | 1 | 30,000 | 2.5 |
| ecs.c6.4xlarge | 16 | 32 | 5/burstable up to 10 | 1,000,000 | 300,000 | 8 | 8 | 20 | 1 | 40,000 | 3 |
| ecs.c6.6xlarge | 24 | 48 | 7.5/burstable up to 10 | 1,500,000 | 450,000 | 12 | 8 | 20 | 1 | 50,000 | 4 |
| ecs.c6.8xlarge | 32 | 64 | 10/none | 2,000,000 | 600,000 | 16 | 8 | 20 | 1 | 60,000 | 5 |
| ecs.c6.13xlarge | 52 | 96 | 12.5/none | 3,000,000 | 900,000 | 32 | 7 | 20 | 1 | 100,000 | 8 |
| ecs.c6.26xlarge | 104 | 192 | 25/none | 6,000,000 | 1,800,000 | 32 | 15 | 20 | 1 | 200,000 | 16 |
Not recommended instance families
The following families are previous-generation or limited-feature families. If they are sold out, use a recommended family listed above (c6, c6e, c7, or newer).
c5, compute-optimized instance family
Use cases: High-PPS scenarios, web frontend servers, MMO game frontend servers, data analytics, batch computing, video encoding, and high-performance scientific and engineering applications.
Specifications:
CPU-to-memory ratio: 1:2
2.5 GHz Intel Xeon Platinum 8163 (Skylake) or 8269CY (Cascade Lake) processors
I/O optimized
Supported disk types: ESSDs, ESSD AutoPL disks, standard SSDs, and ultra disks. For an overview, see Block Storage.
IPv4 and IPv6. For details, see IPv6 communication.
Instances may be deployed on different server platforms. For a consistent platform, use c6, c6e, or c7 instead. A single instance can deliver up to 200,000 IOPS.
Instance types
| Instance type | vCPUs | Memory (GiB) | Network baseline bandwidth (Gbit/s) | Packet forwarding rate (PPS) | NIC queues | ENIs | Private IPv4 addresses per ENI | IPv6 addresses per ENI |
|---|---|---|---|---|---|---|---|---|
| ecs.c5.large | 2 | 4 | 1 | 300,000 | 2 | 2 | 6 | 1 |
| ecs.c5.xlarge | 4 | 8 | 1.5 | 500,000 | 2 | 3 | 10 | 1 |
| ecs.c5.2xlarge | 8 | 16 | 2.5 | 800,000 | 4 | 4 | 10 | 1 |
| ecs.c5.3xlarge | 12 | 24 | 4 | 900,000 | 4 | 6 | 10 | 1 |
| ecs.c5.4xlarge | 16 | 32 | 5 | 1,000,000 | 4 | 8 | 20 | 1 |
| ecs.c5.6xlarge | 24 | 48 | 7.5 | 1,500,000 | 6 | 8 | 20 | 1 |
| ecs.c5.8xlarge | 32 | 64 | 10 | 2,000,000 | 8 | 8 | 20 | 1 |
| ecs.c5.16xlarge | 64 | 128 | 20 | 4,000,000 | 16 | 8 | 20 | 1 |
ic5, compute-intensive instance family
Use cases: Web frontend servers, data analytics, batch computing, video encoding, high-PPS scenarios, and MMO game frontend servers.
Specifications:
CPU-to-memory ratio: 1:1
2.5 GHz Intel Xeon Platinum 8163 (Skylake) or 8269CY (Cascade Lake) processors with an all-core turbo frequency of 2.7 GHz
I/O optimized
Supported disk types: ESSDs, ESSD AutoPL disks, standard SSDs, and ultra disks. For an overview, see Block Storage.
IPv4 only (no IPv6 support)
Instance types
| Instance type | vCPUs | Memory (GiB) | Network baseline bandwidth (Gbit/s) | Packet forwarding rate (PPS) | NIC queues | ENIs | Private IPv4 addresses per ENI |
|---|---|---|---|---|---|---|---|
| ecs.ic5.large | 2 | 2 | 1 | 300,000 | 2 | 2 | 6 |
| ecs.ic5.xlarge | 4 | 4 | 1.5 | 500,000 | 2 | 3 | 10 |
| ecs.ic5.2xlarge | 8 | 8 | 2.5 | 800,000 | 2 | 4 | 10 |
| ecs.ic5.3xlarge | 12 | 12 | 4 | 900,000 | 4 | 6 | 10 |
| ecs.ic5.4xlarge | 16 | 16 | 5 | 1,000,000 | 4 | 8 | 20 |
| ecs.ic5.6xlarge | 24 | 24 | 7.5 | 1,500,000 | 6 | 8 | 20 |
| ecs.ic5.8xlarge | 32 | 32 | 10 | 2,000,000 | 8 | 8 | 20 |
| ecs.ic5.16xlarge | 64 | 64 | 20 | 3,000,000 | 16 | 8 | 20 |
sn1ne, network-enhanced compute-optimized instance family
Use cases: High-PPS scenarios (live commenting, telecom data forwarding), web frontend servers, MMO game frontend servers, data analytics, batch computing, video encoding, and high-performance scientific and engineering applications.
Specifications:
CPU-to-memory ratio: 1:2
2.5 GHz Intel Xeon E5-2682 v4 (Broadwell), Platinum 8163 (Skylake), or 8269CY (Cascade Lake) processors
I/O optimized
Supported disk types: standard SSDs and ultra disks only. For an overview, see Block Storage.
IPv4 and IPv6. For details, see IPv6 communication.
Instances may be deployed on different server platforms. For a consistent platform, use c6, c6e, or c7 instead.
Instance types
| Instance type | vCPUs | Memory (GiB) | Network baseline bandwidth (Gbit/s) | Packet forwarding rate (PPS) | NIC queues | ENIs | Private IPv4 addresses per ENI | IPv6 addresses per ENI |
|---|---|---|---|---|---|---|---|---|
| ecs.sn1ne.large | 2 | 4 | 1 | 300,000 | 2 | 2 | 6 | 1 |
| ecs.sn1ne.xlarge | 4 | 8 | 1.5 | 500,000 | 2 | 3 | 10 | 1 |
| ecs.sn1ne.2xlarge | 8 | 16 | 2 | 1,000,000 | 4 | 4 | 10 | 1 |
| ecs.sn1ne.3xlarge | 12 | 24 | 2.5 | 1,300,000 | 4 | 6 | 10 | 1 |
| ecs.sn1ne.4xlarge | 16 | 32 | 3 | 1,600,000 | 4 | 8 | 20 | 1 |
| ecs.sn1ne.6xlarge | 24 | 48 | 4.5 | 2,000,000 | 6 | 8 | 20 | 1 |
| ecs.sn1ne.8xlarge | 32 | 64 | 6 | 2,500,000 | 8 | 8 | 20 | 1 |