An increasing number of multi-language microservice applications have been developed for programming languages such as Python and Node.js. Enterprise Distributed Application Service (EDAS) allows you to develop multi-language microservice applications by using a service mesh. EDAS also provides service governance capabilities, such as application hosting, service discovery, tracing analysis, and load balancing.

Background information

Applications have evolved from the monolithic architecture to the microservices architecture. The microservices architecture facilitates application development, but it increases the complexity of service deployment and maintenance. Microservices can be developed in any programming language. After you deploy a multi-language microservice, you can use multi-language SDKs or a service mesh to provide tracing analysis, service discovery, and load balancing for the microservice. Multi-language SDKs are intrusive to applications, whereas a service mesh provides these capabilities without requiring changes to application code. Therefore, EDAS uses service meshes to support multi-language microservices.

A service mesh is a dedicated infrastructure layer that handles service-to-service communication. A service mesh delivers requests in a stable and reliable way through the complex topology of services that comprise a modern, cloud-native application, so you do not need to manage that topology yourself. In most cases, a service mesh is implemented as an array of lightweight network proxies that are deployed alongside application code, and the application does not need to be aware of the proxies.

Scenario

BookInfo is an application that displays detailed information about a book, similar to a single catalog entry in an online bookstore. The application displays the description of the book, details such as the International Standard Book Number (ISBN) and the number of pages, and reviews of the book.

BookInfo is a heterogeneous application that consists of several microservices written in different programming languages. Together, these microservices form a classic service mesh scenario: multiple services in different languages, one of which, the Reviews service, has multiple versions.

Figure 1. Architecture for services of different programming languages

The BookInfo application consists of the following four independent services (a simplified sketch of the call flow follows this list):

  • Productpage: a Python service that calls the Details and Reviews services to generate a page. The Productpage service provides the logon and logout features.
  • Details: a Ruby service that contains book information.
  • Reviews: a Java service that contains book reviews and calls the Ratings service. The Reviews service has the following three versions:
    • Version 1 does not call the Ratings service.
    • Version 2 calls the Ratings service and rates the book with one to five black stars.
    • Version 3 calls the Ratings service and rates the book with one to five red stars.
  • Ratings: a Node.js service that contains ratings generated based on book reviews.
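The call flow among these services can be summarized in a minimal sketch. The following Python code is a simplified, hypothetical illustration of the Productpage logic; the service addresses, port, and paths are assumptions for illustration and are not the actual sample code.

# Simplified sketch of how the Productpage service aggregates the Details and
# Reviews services. The service hostnames, port, and paths are assumptions.
import requests

DETAILS_URL = "http://details:9080/details"   # assumed in-cluster address
REVIEWS_URL = "http://reviews:9080/reviews"   # assumed in-cluster address

def get_product_page(product_id):
    """Call the Details and Reviews services and combine the results."""
    details = requests.get("{0}/{1}".format(DETAILS_URL, product_id), timeout=3).json()
    reviews = requests.get("{0}/{1}".format(REVIEWS_URL, product_id), timeout=3).json()
    return {"details": details, "reviews": reviews}

In versions 2 and 3, the Reviews service calls the Ratings service in the same way.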

Prerequisites

An image of the sample application is created and uploaded to an Alibaba Cloud image repository. For more information about how to upload an image, see Build a repository and images.

Download link: BookInfo Sample.

Step 1: Create a Container Service Kubernetes cluster

Log on to the Container Service for Kubernetes console and create a Container Service Kubernetes cluster. For more information, see Create a managed Kubernetes cluster.

To create a Serverless Kubernetes cluster, set VPC to Create VPC and set Service Discovery to PrivateZone. This allows the Serverless Kubernetes cluster to use Alibaba Cloud Service Mesh after the cluster is imported to EDAS. If you set VPC to Select Existing VPC, check whether the cluster contains the virtual private cloud (VPC) and vSwitch resources after you create the cluster.

For more information, see Create a Serverless Kubernetes cluster.

Step 2: Add the Kubernetes cluster to EDAS in the EDAS console

By default, the ack-ahas-sentinel-pilot, ack-arms-pilot, and ack-arms-prometheus components are installed when you add a Kubernetes cluster to EDAS in the EDAS console. The ack-ahas-sentinel-pilot component is an application protection component for throttling and degradation. The ack-arms-pilot component is an Application Real-Time Monitoring Service (ARMS) monitoring component. The ack-arms-prometheus component is a Prometheus monitoring component.

  1. Log on to the EDAS console.
  2. In the left-side navigation pane, choose Resource Management > Container Service Kubernetes Clusters.
  3. In the top navigation bar, select the region of the namespace that you want to manage. From the Namespace drop-down list, select the namespace to which you want to add the cluster. Then, click Synchronize Container Service Kubernetes Cluster.
  4. In the Actions column of the Kubernetes cluster that you want to add, click Import.
  5. In the Import Kubernetes Cluster dialog box, select a namespace from the Namespace drop-down list, turn on Service Mesh, and then click Import.
    Note
    • If no namespace is available, the default namespace is selected.
    • If you do not turn on Service Mesh for a Kubernetes cluster before you add the Kubernetes cluster, you can turn on Service Mesh in the Service Mesh column of the Kubernetes cluster.
    • By default, two internal-facing Server Load Balancer (SLB) instances of the slb.s1.small specification are created when you turn on Service Mesh. Port 6443 and port 15011 on the two SLB instances are exposed. You are charged for the two SLB instances that are automatically created. For more information, see Background information.

    When the Kubernetes cluster is in the Running state and the value of Import Status is Imported, the Kubernetes cluster is added to EDAS.

Step 3: Enable Tracing Analysis

The Istio service mesh allows you to use Tracing Analysis in the EDAS console to monitor applications that are written in different programming languages. The Istio proxies automatically report span information. However, applications must propagate a set of HTTP headers so that the spans can be associated with a single trace.

Applications must forward the following HTTP headers from inbound requests to outbound requests:
  • x-request-id
  • x-b3-traceid
  • x-b3-spanid
  • x-b3-parentspanid
  • x-b3-sampled
  • x-b3-flags
  • x-ot-span-context

The following examples show how some of the services in the sample application propagate these HTTP headers.

The Productpage service (Python) collects and forwards the headers as follows (a sketch of how the collected headers are attached to an outbound call appears after the snippet):
# Assumes that an OpenTracing-compatible tracer is initialized elsewhere in the
# application as `tracer`.
from opentracing.propagation import Format
from opentracing_instrumentation.request_context import get_current_span

def getForwardHeaders(request):
    headers = {}

    # x-b3-*** headers can be populated using the opentracing span
    span = get_current_span()
    carrier = {}
    tracer.inject(
        span_context=span.context,
        format=Format.HTTP_HEADERS,
        carrier=carrier)

    headers.update(carrier)

    # ...

    incoming_headers = ['x-request-id']

    # ...

    for ihdr in incoming_headers:
        val = request.headers.get(ihdr)
        if val is not None:
            headers[ihdr] = val

    return headers
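The headers that getForwardHeaders collects must be attached to every outbound request that the service sends to downstream services. The following minimal sketch shows the idea; the downstream URL is an illustrative assumption.

# Sketch: attach the collected headers to an outbound call so that the trace
# context is propagated to the downstream service. The URL is an assumption.
import requests

def get_reviews(product_id, request):
    headers = getForwardHeaders(request)
    return requests.get("http://reviews:9080/reviews/{0}".format(product_id),
                        headers=headers, timeout=3)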
The Reviews service (Java) receives the headers and passes them to the Ratings service as follows:
@GET
@Path("/reviews/{productId}")
public Response bookReviewsById(@PathParam("productId") int productId,
                            @HeaderParam("end-user") String user,
                            @HeaderParam("x-request-id") String xreq,
                            @HeaderParam("x-b3-traceid") String xtraceid,
                            @HeaderParam("x-b3-spanid") String xspanid,
                            @HeaderParam("x-b3-parentspanid") String xparentspanid,
                            @HeaderParam("x-b3-sampled") String xsampled,
                            @HeaderParam("x-b3-flags") String xflags,
                            @HeaderParam("x-ot-span-context") String xotspan) {

  if (ratings_enabled) {
    JsonObject ratingsResponse = getRatings(Integer.toString(productId), user, xreq, xtraceid, xspanid, xparentspanid, xsampled, xflags, xotspan);
    // ... (the remainder of the method builds the HTTP response from ratingsResponse)
  }
  // ...
}

Step 4: Deploy multi-language applications on the Container Service for Kubernetes cluster

You must deploy the microservices in the sample scenario to EDAS as applications. The following procedure shows how to deploy a microservice.

Note Multi-language applications can be deployed only as images.
  1. Log on to the EDAS console.
  2. In the left-side navigation pane, click Applications. In the top navigation bar of the page that appears, select a region. Select a namespace and click the Create Application tab.
  3. On the Basic Information tab, specify the Cluster Type and Application Runtime Environment parameters, and click Next.
    Parameter settings:
    • Cluster Type: Select Kubernetes Cluster.
    • Application Runtime Environment: Select Node.js, C++, Go, and Other Languages.
  4. On the Configurations tab, set the environment parameter, basic parameters, and image parameters for the application and click Next.
    Parameter settings:
    • Namespace: The namespace in which the imported Kubernetes cluster resides. If you have not created a namespace or do not select a namespace, the default namespace is used.
    • Cluster: Select the imported Kubernetes cluster from the drop-down list on the right.
    • K8s Namespace: In this example, default is selected.
    • Application Name: Enter a name for the application. The name must start with a letter and can contain digits, letters, and hyphens (-). The name can be up to 36 characters in length.
    • Application Description: Enter a description of the application. The description can be up to 128 characters in length.
    • Version: Click Generate Version Number to generate a version number, or specify a custom version number as prompted on the page.
    • Image Type: Select Configure Image. Select the region where the image resides and the image that you want to deploy. Then, set the image version.
    • Total Pods: Set the number of pods that you want to deploy for the application.
    • Single-pod Resource Quota: Specify the amount of CPU and memory resources that you want to reserve for each pod. To set a limit, enter a numeric value. The default value 0 indicates that no limit is set.
  5. Configure the advanced settings for the application.
    1. Configure the service mesh.
      Parameter settings:
      • Network Protocol: Select a supported protocol from the drop-down list.
      • Service Name: Enter the name of the microservice that the application provides. The name must be the same as the name used in the application code so that the microservice can be registered and called.
      • Service Port: Enter the port on which the application provides the service. The port must be the same as the port used in the application code so that the service can be registered and called. A minimal sketch that illustrates this requirement follows this procedure.
    2. Optional: Configure other advanced settings as required.
  6. Click Create Application. On the Creation Completed tab, click Confirm Application Creation.
    Several minutes may be required to create the application. During the process, you can click View Details at the top of the page to go to the Change List page. On this page, you can view the deployment progress and related logs. After the application is created, the Application Overview page appears. You can check the status of the instance pod. If the pod is in the Running state, the application is published. You can click the status of the pod to view the advanced parameters of the application instance, such as Deployment, Pods, and Startup Command.
  7. Repeat the preceding steps to deploy other microservices of the sample application.
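The following minimal sketch illustrates the service mesh requirement described in the advanced settings: the application must listen on the same port that you enter as the Service Port. The Flask framework, the route, and port 18082 (the container port used later in this example) are assumptions for illustration.

# Minimal sketch (assumed Flask app): the listening port must match the
# Service Port that is configured in the EDAS service mesh settings.
from flask import Flask

app = Flask(__name__)

@app.route("/productpage")
def productpage():
    return "Hello from the Productpage service"

if __name__ == "__main__":
    # 18082 must match the Service Port and the container port configured in EDAS.
    app.run(host="0.0.0.0", port=18082)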

Verify the result

After the services are deployed, access the Productpage service to verify the result. The application page displays the description of a book, details such as the ISBN and the number of pages, and reviews of the book. You can also perform logon and logout operations.

  1. In the Access configuration section of the Application Overview page, click the icon next to SLB (Public Network).
  2. In the SLB (Public Network) dialog box, set the parameters to configure an SLB instance and a listener, and click OK.
    1. Select Create SLB from the Select SLB drop-down list.
      If you have already created an SLB instance, you can select the SLB instance from the list.
    2. Select a protocol and click Add Listener on the right of the protocol. Then, set the SLB port to 80, and the container port to 18082.
      Note If you select HTTPS, you must select SSL Certificate.
      It takes about 30 seconds to add an Internet-facing SLB instance. After the instance is added, its endpoint appears on the right side of SLB (Public Network) in the Access configuration section. The endpoint is in the format of SLB instance IP:port number.
  3. Paste the endpoint of the Internet-facing SLB instance into the address bar of your browser and press Enter to open the homepage of the sample application (online bookstore). You can also send a test request to the endpoint from a script, as shown in the sketch after this procedure.
  4. On the homepage of the client application, click Normal user or Test user.
  5. In the upper part of the page, click Sign in. Specify User Name and Password and click Sign in to log on to the sample application.
    In this example, both User Name and Password are set to admin.
    In the sample application, you can view the details of a book and reviews of the book.
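If you want to verify access from a script instead of a browser, you can send a test request to the endpoint, as shown in the following minimal sketch. The endpoint is a placeholder that you must replace with the endpoint of your Internet-facing SLB instance; the /productpage path follows the BookInfo sample.

# Verification sketch: replace the placeholder with the endpoint of your
# Internet-facing SLB instance (format: SLB instance IP:port).
import requests

ENDPOINT = "http://<SLB-instance-IP>:80"  # placeholder, not a real address

resp = requests.get("{0}/productpage".format(ENDPOINT), timeout=5)
print(resp.status_code)        # 200 indicates that the application is reachable
print("reviews" in resp.text)  # True if the page contains the Reviews section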

Application monitoring

After you deploy the sample application, you can collect key metrics to monitor the health of the application. The metrics include general metrics, such as the total number of requests and the average response time, service metrics, and system information such as CPU and memory usage. The service metrics cover both the services that the application provides and the services on which the application depends.

  1. Log on to the EDAS console.
  2. In the left-side navigation pane, click Applications. In the top navigation bar, select a region. In the upper part of the page, select a namespace. On the Applications page, click the name of the desired application.
  3. View metrics that indicate the health of the application.
    1. In the left-side navigation pane, click Application Overview.
    2. On the right side of the Application Overview page, click the Overall Analysis tab.
      On the Overall Analysis tab, you can view the general metrics of the application, such as the total number of requests and the average response time. You can also view service metrics for both the services that the application provides and the services on which the application depends. For more information, see Overview.
    3. On the right side of the Application Overview page, click the Topology Graph tab.
      On the Topology Graph tab, you can view parent components, child components, and their relationships. You can also view metrics such as the number of requests, response time, and error rate. For more information, see Application overview.
  4. Check the system usage of the application.
    1. In the left-side navigation pane, choose Monitor > Prometheus.
    2. On the Prometheus page, set the namespace and Pod parameters in the upper-left corner and select the required time period in the upper-right corner.
      On this page, you can view the system information about the application, such as the IP address, status, containers, CPU usage, and memory usage of the pod. For more information, see View metrics. You can also query these metrics programmatically, as shown in the sketch below.
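If you prefer to retrieve the system metrics programmatically, the following sketch queries a Prometheus-compatible HTTP API. The endpoint URL is a placeholder, and the query uses the standard cAdvisor metric container_cpu_usage_seconds_total; whether your Prometheus instance is reachable from your network in this way is an assumption that you should verify.

# Sketch: query per-pod CPU usage from a Prometheus-compatible HTTP API.
# The endpoint URL is a placeholder.
import requests

PROM_URL = "http://<prometheus-endpoint>/api/v1/query"  # placeholder

query = 'sum(rate(container_cpu_usage_seconds_total{namespace="default"}[5m])) by (pod)'
resp = requests.get(PROM_URL, params={"query": query}, timeout=5)
for result in resp.json().get("data", {}).get("result", []):
    print(result["metric"].get("pod"), result["value"][1])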