
Understanding Core Services: Compute, Storage, and Networking Simplified

The article explains Alibaba Cloud’s core services—compute, storage, and networking—and how they work together to build scalable applications.

Note: In the last blog, we got you up and running. Now we're going to understand the holy trinity of cloud computing: compute, storage, and networking. These three services are the backbone of everything you'll build. Let's make them click.

Introduction

I still remember the moment cloud computing truly "clicked" for me. It was when I realized that every single application, whether it's Netflix, Uber, or your cousin's e-commerce store, is built on just three fundamental ideas:

  1. Compute: Where your code runs
  2. Storage: Where your data lives
  3. Networking: How compute and storage talk to each other

Everything else in Alibaba Cloud (and any cloud provider) is just sophistication layered on top of these three concepts.

In this blog, we're going to build a real mental model of how these services work together. By the end, you'll understand not just what they are, but how to use them effectively.

Part 1: Compute - Where Your Code Lives

What is Compute?

Think of compute as the CPU and RAM of a physical computer, but virtualized and available on demand. On Alibaba Cloud, the main compute service is ECS (Elastic Compute Service).

When you create an ECS instance, you're essentially renting a portion of a physical server. That server has multiple instances running on it (isolation handled by hypervisors), and you only pay for the resources you use.

ECS Instance Types Explained

When you're choosing an instance type, you're making a choice about CPU, RAM, and network performance. Here are the most common types:

Instance Type | Best For | Example Use Case
General Purpose (t6) | Learning, testing, light workloads | Your first server, small blog
Compute Optimized (c6) | CPU-intensive work | Video transcoding, data processing
Memory Optimized (r6) | In-memory databases, caching | Redis, in-memory analytics
Storage Optimized (i2) | High disk I/O | Databases with heavy read/write

My personal recommendation for beginners: Start with t6 (general purpose). It's cheap, and you'll learn more by experimenting with different workloads than by choosing the perfect instance type.

Real-World Example: A Web Server

Let's say you're hosting a Node.js web server:

  1. You create a t6.medium ECS instance (2 vCPU, 4GB RAM, ~$30/month)
  2. You SSH into it and install Node.js
  3. You deploy your application
  4. Users connect to the instance's public IP and hit your application
  5. Your application processes requests and returns responses

That's it. That's the compute layer.
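To make step 3 concrete, here's a minimal sketch of the kind of server you might deploy on that instance, written in TypeScript on Node's built-in http module (the port and response text are just placeholders):

```typescript
import { createServer } from "node:http";

const server = createServer((req, res) => {
  // Every request that reaches the instance's public IP lands here.
  res.writeHead(200, { "Content-Type": "text/plain" });
  res.end("Hello from an ECS instance!\n");
});

// 0.0.0.0 binds to all interfaces, so the public IP works too.
// Ports below 1024 need root; in practice you'd often listen on 3000+
// behind a reverse proxy or load balancer instead.
server.listen(80, "0.0.0.0", () => {
  console.log("Web server listening on port 80");
});
```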

Auto-Scaling: Compute Intelligence

Here's where it gets interesting. What happens when your app suddenly gets popular?

Without auto-scaling: your instance gets overloaded, the website slows down, and users leave.

With auto-scaling: more users connect, Alibaba Cloud detects the high CPU usage, new instances spin up automatically, traffic is distributed across them, and everyone stays happy.

This is configured through Auto Scaling groups. You set rules like: "If CPU usage > 70% for 5 minutes, add 2 more instances."

Part 2: Storage - Where Your Data Lives

Three Storage Paradigms

Storage isn't one-size-fits-all. Alibaba Cloud offers three different storage models:

1. Block Storage (what AWS calls EBS; on Alibaba Cloud it's a cloud disk)

What it is: A virtual hard drive attached to your ECS instance

When to use: For operating system, applications, and structured data

Example: Your web server needs a 100GB drive for the OS and application files

How it works:

● You create a disk (say, 100GB)

● You attach it to an ECS instance

● Inside the instance, it appears as /dev/vdb

● You format it, mount it, and use it like a regular hard drive

Cost: Roughly $0.10/GB/month

2. Object Storage (OSS)

What it is: A service for storing any type of file (images, videos, documents, logs, etc.) and accessing it via HTTP/HTTPS

When to use: User uploads, backups, media files, static website content

Key differences from Block Storage:

● Not attached to an instance; accessed over the internet

● Pay for storage + data transfer

● Extremely scalable (no size limits)

● Highly available (your files are replicated automatically)

Real-world example:

A user uploads a photo to your web server, the web server uploads it to OSS, OSS returns a URL, you store that URL in your database, and later users download the photo directly from OSS.

Cost: Around $0.0225/GB/month (very cheap)
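Here's what the upload step in that flow might look like from the web server's side, as a minimal sketch using the ali-oss Node.js SDK in TypeScript (the region, bucket name, and file paths are placeholders you'd replace with your own):

```typescript
import OSS from "ali-oss";

const client = new OSS({
  region: "oss-cn-hangzhou",                          // the region your bucket lives in
  accessKeyId: process.env.OSS_ACCESS_KEY_ID!,
  accessKeySecret: process.env.OSS_ACCESS_KEY_SECRET!,
  bucket: "my-photo-bucket",                          // placeholder bucket name
});

async function uploadPhoto(localPath: string, objectName: string): Promise<string> {
  // put() uploads the local file and returns metadata, including a URL
  // you can save in your database and hand back to the browser.
  const result = await client.put(objectName, localPath);
  return result.url;
}

uploadPhoto("./cat.jpg", "photos/cat.jpg").then((url) => console.log("Stored at:", url));
```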

3. Database Storage (RDS, MongoDB, Redis)

What it is: Managed databases that handle storage plus sophisticated data access patterns

When to use: Structured relational data, document stores, or in-memory caching

Example: A customer database in RDS where you query users by email or filter orders by date
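As a rough sketch of what that kind of query looks like from a Node.js application, here's a TypeScript example using the mysql2 driver (the endpoint, database, table, and column names are made up for illustration):

```typescript
import mysql from "mysql2/promise";

// The host is the connection endpoint RDS gives you in the console;
// the database, user, and table names here are invented for the example.
const pool = mysql.createPool({
  host: "rm-xxxxxx.mysql.rds.aliyuncs.com",
  user: "app_user",
  password: process.env.DB_PASSWORD,
  database: "shop",
});

async function findUserByEmail(email: string) {
  // Parameterized query: the driver escapes the value for you.
  const [rows] = await pool.query(
    "SELECT id, name, email FROM users WHERE email = ?",
    [email]
  );
  return rows;
}
```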

Why not just store in Block Storage?

● Databases are optimized for queries, backups, replication

● They handle multi-user access safely

● They provide advanced features like automatic backups

Part 3: Networking - How Everything Talks

The Network Stack

Let's build a complete networking picture from bottom to top:

Layer 1: VPC (Virtual Private Cloud)

Your own isolated network on Alibaba Cloud. Think of it as your private apartment building in a huge city.

Key properties:

● Isolated from other customers' VPCs

● You define the IP address ranges (CIDR blocks)

● You create subnets within it

Example VPC setup:

VPC CIDR: 10.0.0.0/16

Public Subnet 1: 10.0.1.0/24 (faces the internet)

Public Subnet 2: 10.0.2.0/24 (faces the internet)

Private Subnet 1: 10.0.10.0/24 (internal only)

Private Subnet 2: 10.0.11.0/24 (internal only)
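If the /16 and /24 notation is new, the quick sketch below shows how block sizes follow from the prefix length:

```typescript
// A /N prefix leaves 32 - N bits for host addresses,
// so a CIDR block contains 2^(32 - N) addresses
// (a handful of which are reserved by the platform).
function cidrSize(prefixLength: number): number {
  return 2 ** (32 - prefixLength);
}

console.log(cidrSize(16)); // 65536 addresses -> the whole VPC (10.0.0.0/16)
console.log(cidrSize(24)); // 256 addresses   -> each subnet (e.g. 10.0.1.0/24)
```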

Layer 2: Subnets

Subdivisions within your VPC. They can be:

Public: Resources here can reach the internet directly

Private: Resources here cannot be reached directly from the internet

Pro pattern: Put your web servers in public subnets, databases in private subnets. Users talk to web servers over the internet; web servers talk to databases over the VPC.

Layer 3: Security Groups

Firewalls at the instance level. Each rule specifies:

Protocol: TCP, UDP, ICMP, or all

Port: Single port or port range

Source/Destination: IP address, CIDR block, or other security groups

Example Security Group for a Web Server:

Direction | Protocol | Port | Source
Inbound | TCP | 80 | 0.0.0.0/0 (anywhere)
Inbound | TCP | 443 | 0.0.0.0/0 (anywhere)
Inbound | TCP | 22 | Your IP only
Outbound | All | All | Anywhere (default)

This means:

● Anyone can hit HTTP and HTTPS

● Only you can SSH into the instance

● The instance can reach anywhere outbound
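To make the model concrete, here's a hypothetical TypeScript data structure describing those same rules. This is not an Alibaba Cloud SDK, just an illustration of the three pieces of information every inbound rule carries (the SSH source address is an example IP):

```typescript
// Hypothetical data model -- real rules are managed in the console or via the API.
interface InboundRule {
  protocol: "tcp" | "udp" | "icmp" | "all";
  port: number | "all";
  sourceCidr: string; // "0.0.0.0/0" means "from anywhere"
}

const webServerRules: InboundRule[] = [
  { protocol: "tcp", port: 80,  sourceCidr: "0.0.0.0/0" },      // HTTP from anywhere
  { protocol: "tcp", port: 443, sourceCidr: "0.0.0.0/0" },      // HTTPS from anywhere
  { protocol: "tcp", port: 22,  sourceCidr: "203.0.113.7/32" }, // SSH from your IP only (example IP)
];
```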

Layer 4: Internet Gateway and NAT

Internet Gateway: Allows resources in public subnets to reach the internet

NAT Gateway: Allows resources in private subnets to reach the internet while remaining unreachable from outside

Layer 5: Load Balancer (SLB)

Distributes incoming traffic across multiple instances.

Why? If you have 3 web servers and a user accesses one overloaded instance, they'll have a bad experience. A load balancer ensures traffic distributes fairly.

Part 4: Putting It All Together - A Real Application

Let's design a simple web application and see how compute, storage, and networking work together:

The Application: A Photo Sharing Site

User journey:

  1. User logs into your website
  2. User uploads a photo
  3. User views their photo feed

How Requests Flow:

Upload Photo:

  1. User submits form to Load Balancer
  2. Load Balancer sends to Web1 or Web2 (whichever is less busy)
  3. The web server receives the photo, uploads it to OSS, and records the returned URL in RDS
  4. Response: "Photo uploaded successfully"

View Feed:

  1. User requests feed from Load Balancer
  2. Web server queries RDS for user's photos (with URLs)
  3. Web server returns HTML with image URLs pointing to OSS
  4. Browser displays images directly from OSS (bypassing web servers!)
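Here's a compact sketch of both flows as Express routes, reusing the OSS client and MySQL pool from the Part 2 sketches. The ./clients module, the photos table, and the field names are all hypothetical:

```typescript
import express from "express";
import multer from "multer";
// Hypothetical module exporting the OSS client and MySQL pool from the Part 2 sketches.
import { oss, db } from "./clients";

const app = express();
const upload = multer({ dest: "/tmp/uploads" }); // buffer the upload on local disk first

// Upload flow: browser -> Load Balancer -> this server -> OSS + RDS
app.post("/photos", upload.single("photo"), async (req, res) => {
  const objectName = `photos/${req.file!.filename}.jpg`;
  const { url } = await oss.put(objectName, req.file!.path); // store the file in OSS
  await db.query("INSERT INTO photos (user_id, url) VALUES (?, ?)", [req.body.userId, url]);
  res.json({ message: "Photo uploaded successfully", url });
});

// Feed flow: the server only returns URLs; the browser loads the images straight from OSS
app.get("/feed/:userId", async (req, res) => {
  const [rows] = await db.query("SELECT url FROM photos WHERE user_id = ?", [req.params.userId]);
  res.json(rows);
});

// The load balancer listens on ports 80/443 and forwards to this port on each instance.
app.listen(3000);
```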

Why this design?

Scalability: Add more web servers as traffic grows; they all talk to the same database and storage

High availability: The load balancer detects failed instances and reroutes users to healthy ones

Cost efficiency: OSS is cheaper than storing on block storage; database is specialized for queries

Part 5: Performance and Cost Optimization

Caching Layer

In the real world, you'd add one more layer: a cache.

Why?

● Most requests are reads (user viewing feeds repeatedly)

● Database queries are slow

● Cache stores frequently accessed data in RAM

● First request goes to database; subsequent requests hit the cache (10x faster)
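Here's a minimal cache-aside sketch in TypeScript using the ioredis client, assuming the MySQL pool from the Part 2 sketch and a placeholder Redis endpoint:

```typescript
import Redis from "ioredis";
// Hypothetical module exporting the MySQL pool from the Part 2 sketch.
import { db } from "./clients";

// Placeholder endpoint for a managed Redis instance.
const cache = new Redis({ host: "r-xxxxxx.redis.rds.aliyuncs.com" });

async function getFeed(userId: string) {
  const key = `feed:${userId}`;

  // 1. Try the cache first (a RAM lookup).
  const cached = await cache.get(key);
  if (cached) return JSON.parse(cached);

  // 2. Cache miss: query the database, then store the result with a short TTL.
  const [rows] = await db.query("SELECT url FROM photos WHERE user_id = ?", [userId]);
  await cache.set(key, JSON.stringify(rows), "EX", 60); // expire after 60 seconds
  return rows;
}
```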

Auto-Scaling

Remember how the web tier should scale based on demand? Here's a real setup:

1.  Create an ECS instance with your application

2.  Create an Image from that instance

3.  Create an Auto Scaling group:

  • Min instances: 2
  • Max instances: 10
  • Desired: 2
  • Scale-up rule: Add 2 instances if CPU > 70%
  • Scale-down rule: Remove 1 instance if CPU < 30%

4.  Attach Load Balancer to the Auto Scaling group
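Written out as plain data, the settings above look roughly like this (this is not an SDK call, just a sketch of what you would enter in the Auto Scaling console):

```typescript
const scalingGroup = {
  minSize: 2,
  maxSize: 10,
  desiredCapacity: 2,
  scaleOut: { metric: "CPU utilization", threshold: 70, forMinutes: 5, addInstances: 2 },
  scaleIn:  { metric: "CPU utilization", threshold: 30, forMinutes: 5, removeInstances: 1 },
  attachToLoadBalancer: true, // new instances register with the SLB automatically
};
```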

Now, when 100 users suddenly hit your site:

● Load spikes

● Auto Scaling detects high CPU

● 3 new instances spin up in ~2 minutes

● Traffic spreads across 5 instances

● Users get consistent fast performance

● When traffic drops, instances scale back down

Part 6: Real-World Cost Breakdown

Let's calculate monthly cost for our photo sharing app at moderate scale:

Component | Spec | Cost/Month
ECS (Auto Scaling) | 2-4 instances, t6.medium | $60-120
RDS | db.t3.medium, 100GB | $80
OSS | 500GB stored + 200GB transfer | $20
Load Balancer | Standard SLB, 1M requests | $40
Total | | ~$200-260

Compare this to traditional server hosting (often $500+/month), and cloud infrastructure already makes sense. Add in the fact that you're paying only for the capacity you actually use, rather than for hardware sitting idle, and you see why the cloud has won.

Part 7: Key Takeaways

By now, you should understand:

Compute (ECS): Where your code runs; can scale automatically

Storage: Block storage for OS/apps, OSS for media, databases for structured data

Networking: VPC isolates you; subnets separate public/private; security groups control access; load balancers distribute traffic

They work together: Web servers (compute) read images from OSS (object storage) and user data from RDS (database storage), all talking through the VPC (networking)

Auto-scaling and caching: Make your infrastructure smart and efficient

What's Next?

Now that you understand these three core services, the next blog dives deeper into:

Designing for scale: How to architect for millions of users

Redundancy and failure: What happens when things break

Global distribution: Serving users across continents

Resources

ECS Documentation

OSS Getting Started

RDS User Guide

VPC Documentation

Auto Scaling Setup Guide

Your Turn: In the comments, describe an application you want to build. I'll help you map out how compute, storage, and networking would work together for it.

Next post: Building Scalable Architectures on Alibaba Cloud

Learn how to handle 1 million concurrent users


Disclaimer: The views expressed herein are for reference only and don't necessarily represent the official views of Alibaba Cloud.

Farah Abdou

15 posts | 0 followers

You may also like

Comments

Farah Abdou

15 posts | 0 followers

Related Products