When no Server Load Balancer (SLB) instance is available, you can use the cloud controller manager (CCM) to automatically create and manage an SLB instance for a LoadBalancer Service. This topic describes how to use an automatically created SLB instance to expose an application. An NGINX application is used as an example.

Precautions

  • CCM configures SLB instances only for Type=LoadBalancer Services. CCM does not configure SLB instances for other types of Services.
    Notice When a Service is changed from Type=LoadBalancer to another type, CCM deletes the configurations that are added to the SLB instance of the Service. As a result, you can no longer use the SLB instance to access the Service.
  • CCM uses a declarative API. CCM automatically updates the configurations of an SLB instance to match the configurations of the exposed Service when specific conditions are met. If you modify the configurations of an SLB instance in the SLB console, CCM may overwrite the changes.
    Notice Do not use the SLB console to modify the configurations of the SLB instance that is created and managed by CCM. Otherwise, the modifications may be overwritten and the Service may become inaccessible.
  • You cannot change the SLB instance of a LoadBalancer Service after the Service is created. To change the SLB instance, you must create a new Service.
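
Because CCM reconciles the SLB instance to match the Service configuration, SLB-related settings should be changed on the Service itself, for example through annotations, rather than in the SLB console. The following is a minimal sketch that assumes the my-nginx-svc Service and the bandwidth annotation described later in this topic; the value 5 is only an example:

    # Update the peak bandwidth of the CCM-managed SLB instance by annotating the
    # Service instead of editing the SLB instance in the SLB console.
    kubectl annotate service my-nginx-svc \
      service.beta.kubernetes.io/alicloud-loadbalancer-bandwidth=5 --overwrite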

SLB resource quotas

  • CCM creates SLB instances for Type=LoadBalancer Services. By default, you can have a maximum of 60 SLB instances within each Alibaba Cloud account. To create more than 60 SLB instances, submit a ticket.
    Note In the ticket, specify that you want to modify the slb_quota_instances_num parameter to create more SLB instances.
  • CCM automatically creates SLB listeners that use Service ports. By default, each SLB instance supports a maximum of 50 listeners. To create more than 50 listeners for an SLB instance, submit a ticket.
    Note In the ticket, specify that you want to modify the slb_quota_listeners_num parameter to create more listeners for each SLB instance.
  • CCM automatically adds Elastic Compute Service (ECS) instances to backend server groups of an SLB instance based on the Service configurations.
    • By default, an ECS instance can be added to a maximum of 50 backend server groups. To add an ECS instance to more than 50 server groups, submit a ticket.
      Note In the ticket, specify that you want to modify the slb_quota_backendserver_attached_num parameter to add an ECS instance to more server groups.
    • By default, you can add up to 200 backend servers to an SLB instance. To add more backend servers to an SLB instance, submit a ticket.
      Note In the ticket, specify that you want to modify the slb_quota_backendservers_num parameter to add more backend servers to an SLB instance.

    For more information about SLB resource quotas, see Limits. To query SLB resource quotas, go to the Quota Management page in the SLB console.

Step 1: Deploy a sample application

The following section describes how to use the kubectl command-line tool to deploy an application. For more information about how to deploy an application by using the ACK console, see Create a stateless application by using a Deployment.

  1. Use the following YAML template to create a my-nginx.yaml file:
    apiVersion: apps/v1 # for versions before 1.8.0 use apps/v1beta1
    kind: Deployment
    metadata:
      name: my-nginx    #The name of the sample application. 
      labels:
        app: nginx
    spec:
      replicas: 3       #The number of replicas. 
      selector:
        matchLabels:
          app: nginx     #You must specify the same value in the selector of the Service that is used to expose the application. 
      template:
        metadata:
          labels:
            app: nginx
        spec:
        #  nodeSelector:
        #    env: test-team
          containers:
          - name: nginx
            image: registry.aliyuncs.com/acs/netdia:latest     #Replace the value with your image address. Format: <image_name>:<tag>.
            ports:
            - containerPort: 80                                #The port must be exposed in the Service. 
  2. Run the following command to deploy the my-nginx application:
    kubectl apply -f my-nginx.yaml
  3. Run the following command to check the state of the application:
    kubectl get deployment my-nginx

    Sample response:

    NAME       READY   UP-TO-DATE   AVAILABLE   AGE
    my-nginx   3/3     3            3           50s
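
If you want to check the individual replicas, you can also list the pods by their label. This is an optional check and assumes the app: nginx label used in the template above:

    # List the pods created by the my-nginx Deployment and the nodes that they run on.
    kubectl get pods -l app=nginx -o wide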

Step 2: Use an automatically created SLB instance to expose an application

You can create a LoadBalancer Service in the Container Service for Kubernetes (ACK) console or by using kubectl. After the Service is created, you can use it to expose the application.

Use the ACK console

  1. Log on to the ACK console.
  2. In the left-side navigation pane of the ACK console, click Clusters.
  3. On the Clusters page, find the cluster that you want to manage and click the name of the cluster or click Details in the Actions column. The details page of the cluster appears.
  4. In the left-side navigation pane of the details page, choose Network > Services.
  5. On the Services page, click Create in the upper-right corner of the page.
  6. In the Create Service dialog box, set the required parameters.
    • Name: Enter a name for the Service. my-nginx-svc is used in this example.
    • Type: Select the type of the Service. This parameter determines how the Service is accessed. Choose Server Load Balancer > Public Access > Create SLB Instance, and click Modify to select a specification for the SLB instance. The default specification Small I (slb.s1.small) is used in this example.
    • Backend: Select the application that you want to associate with the Service. The my-nginx application is selected in this example. If you do not associate the Service with a backend, no Endpoint object is created. You can manually associate the Service with a backend. For more information, see services-without-selectors.
    • External Traffic Policy: Select a policy for distributing external traffic. Local is selected in this example.
      • Local: routes network traffic only to pods on the node where the Service is deployed.
      • Cluster: routes network traffic to pods on all nodes in the cluster.
      Note The External Traffic Policy parameter is available only if you set Type to Node Port or Server Load Balancer.
    • Port Mapping: Specify a Service port and a container port. The Service port corresponds to the port field in the YAML file and the container port corresponds to the targetPort field. The container port must be the same as the port that is exposed by the backend pod. Both ports are set to 80 in this example.
    • Annotations: Add one or more annotations to the Service to modify the configuration of the SLB instance. You can select Custom Annotation or Alibaba Cloud Annotation from the Type drop-down list. In this example, the billing method is set to pay-by-bandwidth and the maximum bandwidth is set to 2 Mbit/s to limit the amount of traffic that flows through the Service. A YAML sketch of these annotations is provided after this procedure. For more information, see Use annotations to configure load balancing.
      • Type: Alibaba Cloud Annotation
      • Name: In this example, two annotations are added: service.beta.kubernetes.io/alibaba-cloud-loadbalancer-charge-type and service.beta.kubernetes.io/alicloud-loadbalancer-bandwidth.
      • Value: In this example, the values of the annotations are set to paybybandwidth and 2, respectively.
    • Label: Add one or more labels to the Service. Labels are used to identify the Service.
  7. Click Create.
    On the Services page, you can view the created Service.
  8. Click 39.106.XX.XX:80 in the External Endpoint column to access the sample application.
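
The annotations that you configure in the console correspond to annotations in the Service manifest. The following is a minimal YAML sketch of the Service that the example settings map to, assuming the pay-by-bandwidth billing method and the 2 Mbit/s bandwidth limit described above:

    apiVersion: v1
    kind: Service
    metadata:
      name: my-nginx-svc
      annotations:
        # Billing method of the SLB instance: pay-by-bandwidth.
        service.beta.kubernetes.io/alibaba-cloud-loadbalancer-charge-type: "paybybandwidth"
        # Maximum bandwidth of the SLB instance, in Mbit/s.
        service.beta.kubernetes.io/alicloud-loadbalancer-bandwidth: "2"
    spec:
      type: LoadBalancer
      externalTrafficPolicy: Local    # Matches the Local policy selected in this example.
      selector:
        app: nginx
      ports:
      - port: 80          # Service port.
        targetPort: 80    # Container port exposed by the backend pod.
        protocol: TCP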

Use kubectl

  1. Use the following YAML template to create a my-nginx-svc.yaml file.
    To associate the Service with the backend application, set selector to the value of matchLabels in the my-nginx.yaml file. The value is app: nginx in this example.
    apiVersion: v1
    kind: Service
    metadata:
      labels:
        app: nginx
      name: my-nginx-svc
      namespace: default
    spec:
      ports:
      - port: 80
        protocol: TCP
        targetPort: 80
      selector:
        app: nginx
      type: LoadBalancer
  2. Run the following command to create a Service named my-nginx-svc and use the Service to expose the application:
    kubectl apply -f my-nginx-svc.yaml
  3. Run the following command to verify that the LoadBalancer Service is created:
    kubectl get svc my-nginx-svc

    Sample response:

    NAME           TYPE           CLUSTER-IP    EXTERNAL-IP      PORT(S)        AGE
    my-nginx-svc   LoadBalancer   172.21.XX.XX   39.106.XX.XX     80:30471/TCP   5m
  4. Run the curl <YOUR-External-IP> command to access the sample application. Replace <YOUR-External-IP> with the IP address displayed in the EXTERNAL-IP column.
    curl 39.106.XX.XX

    Sample response:

    <!DOCTYPE html>
    <html>
    <head>
    <title>Welcome to nginx!</title>
    <style>
        body {
            width: 35em;
            margin: 0 auto;
            font-family: Tahoma, Verdana, Arial, sans-serif;
        }
    </style>
    </head>
    <body>
    <h1>Welcome to nginx!</h1>
    <p>If you see this page, the nginx web server is successfully installed and
    working. Further configuration is required.</p>
    
    <p>For online documentation and support please refer to
    <a href="http://nginx.org/">nginx.org</a>.<br/>
    Commercial support is available at
    <a href="http://nginx.com/">nginx.com</a>.</p>
    
    <p><em>Thank you for using nginx.</em></p>
    </body>
    </html>
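
If the EXTERNAL-IP column stays in the pending state or the application is not reachable, you can inspect the Service events to see how CCM configures the SLB instance. This is an optional troubleshooting step; the exact event messages, typically EnsuringLoadBalancer and EnsuredLoadBalancer, depend on the CCM version:

    # Show the Service configuration, endpoints, and recent events emitted while
    # the SLB instance is being created and configured.
    kubectl describe service my-nginx-svc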