Deploy a microservice-based multi-language application
Last Updated: May 19, 2022
An increasing number of microservice-based applications use different programming
languages such as Python and Node.js. Enterprise Distributed Application Service (EDAS)
allows you to deploy microservice-based multi-language applications by using a service
mesh. EDAS provides service governance capabilities and the following features for
microservice-based applications: application hosting, service discovery, tracing analysis,
and load balancing.
Background information
Applications have evolved from the original monolithic architecture to the current
microservice architecture. The microservice architecture facilitates application development.
However, the microservice architecture increases complexity in service deployment
and operations and maintenance (O&M). Microservices can be developed in any programming
language. To provide features such as tracing analysis, service discovery, and load
balancing for a multi-language microservice, you can use either language-specific SDKs
or a service mesh. Language-specific SDKs are intrusive to applications, whereas a
service mesh is non-intrusive. Therefore, EDAS uses service meshes to support
multi-language microservices.
A service mesh is a dedicated infrastructure layer that handles service-to-service
communication. A service mesh provides a stable and reliable way to deliver requests,
so you do not need to be concerned about the complex topology of services that compose
a modern, cloud-native application. In most cases, a service mesh is implemented as an
array of lightweight network proxies that are deployed alongside application code, and
the application does not need to be aware of the proxies.
Scenario
BookInfo is an application that displays detailed information about a book, similar
to a single catalog entry of an online bookstore. The application displays the
description of a book, details such as the International Standard Book Number (ISBN)
and the number of pages, and reviews of the book.
BookInfo is a heterogeneous application that consists of several microservices written
in different programming languages. Together, these microservices form a classic
service mesh scenario that is composed of multiple services. Among these services,
the Reviews service has multiple versions.
Figure 1. Architecture for multi-language services
The BookInfo application consists of the following four independent services:
Productpage: a Python service that calls the Details and Reviews services to generate
a page. The Productpage service provides the logon and logout features.
Details: a Ruby service that contains book information.
Reviews: a Java service that contains book reviews and calls the Ratings service.
The Reviews service has the following three versions:
Version 1, which does not call the Ratings service.
Version 2, which calls the Ratings service and rates a book with one to five black
stars.
Version 3, which calls the Ratings service and rates a book with one to five red stars.
Ratings: a Node.js service that contains ratings generated based on book reviews.
Prerequisites
An image of the sample application is created and uploaded to your Alibaba Cloud image
repository. For more information about how to upload an image, see Create a repository and build images.
You can download the sample application from Bookinfo Sample.
Step 1: Create a Kubernetes cluster
Create a Container Service for Kubernetes (ACK) cluster or a serverless Kubernetes cluster in the ACK console. To create a serverless Kubernetes cluster, set the VPC parameter to Create VPC and set the Service Discovery parameter to PrivateZone. This way, the serverless Kubernetes cluster can use Alibaba Cloud Service Mesh (ASM)
after the cluster is imported to EDAS. If you set the VPC parameter to Select Existing
VPC, check whether the cluster contains virtual private cloud (VPC) and vSwitch resources
after you create the cluster.
Step 2: Import the ACK cluster to EDAS in the EDAS console
By default, the ack-ahas-sentinel-pilot, ack-arms-pilot, and ack-arms-prometheus components
are installed when you import an ACK cluster to EDAS in the EDAS console. The ack-ahas-sentinel-pilot
component is an application protection component for throttling and degradation. The
ack-arms-pilot component is an Application Real-Time Monitoring Service (ARMS) monitoring
component. The ack-arms-prometheus component is a Prometheus monitoring component.
In the left-side navigation pane, choose Resource Management > Container Service Kubernetes Clusters.
In the top navigation bar, select the region of the microservice namespace that you
want to manage. From the Microservice Namespace drop-down list, select the namespace
to which you want to import the cluster. Then, click Synchronize Container Service Kubernetes Cluster.
In the Actions column of the ACK cluster that you want to import, click Import.
In the Import Kubernetes Cluster dialog box, select a microservice namespace from the Microservice Namespaces drop-down list, turn on Service Mesh, and then click Import.
Note
If you have not created a microservice namespace, the default microservice namespace
is selected.
If you do not enable the integration with ASM when you import the ACK cluster, you
can enable the integration in the Service Mesh column after the cluster is imported.
By default, two internal-facing Server Load Balancer (SLB) instances are created when
you enable the integration with ASM. The specifications of the two SLB instances are
slb.s1.small. The ports of the two SLB instances, port 6443 and port 15011, are exposed.
For more information, see the "Background information" section of the Create an ASM instance topic.
You are charged for the two SLB instances that are automatically created.
If the ACK cluster is in the Running state and the value in the Import Status column of the ACK cluster is Imported, the ACK cluster is imported to EDAS as expected.
Step 3: Enable the integration with Tracing Analysis
The Istio service mesh allows you to use Tracing Analysis to monitor multi-language
applications in the EDAS console. The Istio proxy can automatically send span information.
However, applications must carry HTTP headers so that the span information can be
associated with an individual trace.
Applications must carry the following HTTP headers and pass them from inbound requests
to outbound requests:
x-request-id
x-b3-traceid
x-b3-spanid
x-b3-parentspanid
x-b3-sampled
x-b3-flags
x-ot-span-context
The following examples describe how these HTTP headers are carried in the applications
of some services in the sample multi-language application.
The following HTTP headers are carried in the Python application of the Productpage
service:
def getForwardHeaders(request):
    headers = {}

    # The x-b3-* headers can be populated from the current OpenTracing span.
    # get_current_span, tracer, and Format come from the OpenTracing and Jaeger
    # client libraries that are imported elsewhere in the Productpage application.
    span = get_current_span()
    carrier = {}
    tracer.inject(
        span_context=span.context,
        format=Format.HTTP_HEADERS,
        carrier=carrier)

    headers.update(carrier)

    # ...

    incoming_headers = ['x-request-id']

    # ...

    # Copy the remaining tracing headers from the inbound request to the outbound
    # headers so that outbound calls are associated with the same trace.
    for ihdr in incoming_headers:
        val = request.headers.get(ihdr)
        if val is not None:
            headers[ihdr] = val

    return headers
The following HTTP headers are carried in the Java application of the Reviews service:
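In the Bookinfo sample, the Reviews service reads these headers through JAX-RS @HeaderParam annotations and forwards them on the outbound call to the Ratings service. The following minimal sketch illustrates this pattern; the class name, URL constant, and response handling are illustrative and may differ from the version of the sample that you deploy.

import javax.ws.rs.GET;
import javax.ws.rs.HeaderParam;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;
import javax.ws.rs.client.Invocation;
import javax.ws.rs.core.Response;

@Path("/")
public class ReviewsEndpoint {

    // The Ratings service is reached by its service name and port inside the cluster.
    private static final String RATINGS_SERVICE = "http://ratings:9080/ratings";

    @GET
    @Path("/reviews/{productId}")
    public Response bookReviewsById(@PathParam("productId") int productId,
                                    @HeaderParam("x-request-id") String xreq,
                                    @HeaderParam("x-b3-traceid") String xtraceid,
                                    @HeaderParam("x-b3-spanid") String xspanid,
                                    @HeaderParam("x-b3-parentspanid") String xparentspanid,
                                    @HeaderParam("x-b3-sampled") String xsampled,
                                    @HeaderParam("x-b3-flags") String xflags,
                                    @HeaderParam("x-ot-span-context") String xotspan) {
        // Forward the tracing headers unchanged on the outbound call so that the
        // Istio proxy can associate the inbound and outbound spans with one trace.
        Client client = ClientBuilder.newClient();
        Invocation.Builder builder = client.target(RATINGS_SERVICE + "/" + productId)
                .request("application/json");
        if (xreq != null) builder = builder.header("x-request-id", xreq);
        if (xtraceid != null) builder = builder.header("x-b3-traceid", xtraceid);
        if (xspanid != null) builder = builder.header("x-b3-spanid", xspanid);
        if (xparentspanid != null) builder = builder.header("x-b3-parentspanid", xparentspanid);
        if (xsampled != null) builder = builder.header("x-b3-sampled", xsampled);
        if (xflags != null) builder = builder.header("x-b3-flags", xflags);
        if (xotspan != null) builder = builder.header("x-ot-span-context", xotspan);

        String ratings = builder.get(String.class);
        client.close();
        return Response.ok(ratings).build();
    }
}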
Step 4: Deploy the multi-language application in the EDAS console
In the left-side navigation pane, click Applications. In the top navigation bar, select a region. Then, select a microservice namespace
and click Create Application.
In the Basic Information step, set the parameters in the Cluster Type and Application Runtime Environment
sections and click Next.
GUI element
Description
Cluster Type
The type of the cluster where you want to deploy the application. Select Kubernetes Clusters.
Application Runtime Environment
The runtime environment of the application. Set the Hosted Applications parameter to Node.js, C++, Go, and Other Languages.
In the Configurations step, configure the environment information, basic information, and image settings
for the application, and click Next.
Parameter
Description
Microservice Namespaces
The microservice namespace of the ACK cluster. If you have not created a microservice
namespace or do not select a microservice namespace, this parameter is set to Default.
If you have not created a microservice namespace or you want to create another microservice namespace, click Create Microservice Namespace to create a microservice namespace. For more information, see the "Create a namespace" section of the Manage microservice namespaces topic.
Cluster
The cluster where you want to deploy the application. Select the imported ACK cluster
from the Cluster drop-down list.
If the selected Kubernetes cluster is not imported to EDAS, select This cluster is used for the first time in EDAS. If you select this check box, the
cluster is imported to EDAS when an application is created. This consumes a certain
amount of time. Then, check whether Alibaba Cloud Service Mesh is activated.
Note You can select a cluster that is not in the microservice namespace where you want
to deploy the application.
K8s Namespace
The Kubernetes namespace of the cluster. Internal system objects are allocated to
different Kubernetes namespaces to form logically isolated projects, groups, or user
groups. This way, different groups can be separately managed and can also share the
resources of the entire cluster. Valid values:
default: the default Kubernetes namespace. If no Kubernetes namespace is specified for an
object, the default Kubernetes namespace is used.
kube-system: the Kubernetes namespace of the objects that are created by the system.
kube-public: the Kubernetes namespace that is automatically created by the system. This Kubernetes
namespace can be read by all the users, including the users who are not authenticated.
In this example, default is selected.
If you want to create a custom Kubernetes namespace, click Create Kubernetes Namespace. In the dialog box that appears, enter a name for the Kubernetes namespace in the
K8s Namespace field. The name can contain digits, lowercase letters, and hyphens (-), and can be
1 to 63 characters in length. It must start and end with a letter or a digit.
Application Name
The name of the application. The name must start with a letter and can contain digits,
letters, and hyphens (-). The application name can be up to 36 characters in length.
Application Description
The description of the application. The description can be up to 128 characters in
length.
Version
The version of the application. You can click Generate Version Number on the right side of the page to generate a version number. You can also specify
a version number based on the prompts on the page.
Image Type
The type of the image that is used to deploy the application. Set this parameter to
Configure Image. Then, select the image that you want to use and the region where the image resides
and specify an image version.
Total Pods
The number of pods on which the application is to be deployed.
Single-pod Resource Quota
The amount of CPU and memory resources that you want to reserve for a pod. To set
a limit, enter a numeric value. The default value 0 indicates that no limit is set.
Configure the advanced settings for the application.
Configure the service mesh.
Parameter
Description
Protocol
The protocol to be used. Select a supported protocol from the drop-down list.
Service Name
The name of the microservice that is provided by the application. The service name
must be the same as that provided in the application code. This ensures that the microservice
can be registered and called.
Service Port
The port of the microservice that is provided by the application. The service port
must be the same as the port that is used in the application code. This ensures that
the microservice can be registered and called. For an example of how the service name
and service port map to the application code, see the sketch after this table.
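For example, in the Bookinfo sample, the Reviews application reaches the Ratings service by its service name and port, so the Ratings application must be configured with the service name ratings and the service port 9080. The following minimal sketch (with names assumed from the sample) shows how the configured service name and port map to the URL that is used in the application code.

public class RatingsAddress {
    // These values must match the Service Name and Service Port that are
    // configured for the Ratings application in the service mesh settings.
    static final String SERVICE_NAME = "ratings";
    static final int SERVICE_PORT = 9080;

    // The in-mesh URL that the Reviews service uses to call the Ratings service.
    static String ratingsUrl(String productId) {
        return "http://" + SERVICE_NAME + ":" + SERVICE_PORT + "/ratings/" + productId;
    }

    public static void main(String[] args) {
        System.out.println(ratingsUrl("0")); // prints http://ratings:9080/ratings/0
    }
}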
Optional: Configure other advanced settings as required.
After the settings are configured, click Create Application. In the Creation Completed step, click Create Application.
You may need to wait several minutes before the application is deployed. During this
process, you can click View Details in the upper part of the page to go to the Change List page. On the Change List page, you can view the deployment progress and related logs.
After the application is deployed, the Application Overview page appears. On this page, you can check the status of the pod. If the pod is in
the Running state, the application is published. You can click the state of the pod to view the
advanced settings of the application instance, such as Deployment, Pods, and Startup Command.
Repeat the preceding steps to deploy other microservices of the sample application.
Verify the results
After the application is deployed, access the main service. The application page displays
the description of a book, details such as the ISBN and the number of pages, and reviews
on the book. You can also perform logon and logout operations.
In the Access configuration section of the Application Overview page, click the icon next to SLB (Public Network).
In the SLB (Public Network) dialog box, set the parameters to configure an SLB instance and a listener, and click
OK.
Select Create SLB from the Select SLB drop-down list.
If you have created an SLB instance, you can select the SLB instance from the list.
Select a protocol and click Add Listener on the right of the protocol. Then, set the SLB port to 80, and the container port to 18082.
Note If you select HTTPS, you must also set the SSL Certificate parameter.
The Internet-facing SLB instance requires about 30 seconds to be added or created.
On the right side of SLB (Public Network) in the Access configuration section, the endpoint of the Internet-facing SLB instance appears after the instance
is added or created. The endpoint format is SLB instance IP address:Port number.
Paste the endpoint of the Internet-facing SLB instance into the address bar of your
browser and press Enter to open the homepage of the sample application (online bookstore).
On the homepage of the client application, click Normal user or Test user.
In the upper part of the page, click Sign in. Enter values in the User Name and Password fields and click Sign in to log on to the sample application.
In this example, both the User Name and Password parameters are set to admin.
In the sample application, you can view the details of a book and reviews on the book.
Implement application monitoring
After you deploy the sample application, you can collect key metrics to monitor the
health status of the application. These include general metrics, service information,
and system information such as CPU and memory usage. The general metrics include the
total number of requests and the average response time. The service information covers
the services that are provided by the application and the services on which the
application depends.
In the left-side navigation pane, click Applications. In the top navigation bar, select a region. In the upper part of the page, select
a namespace. On the Applications page, click the name of the desired application.
View the metrics that indicate the health status of the application.
In the left-side navigation pane, click Application Overview.
On the right side of the Application Overview page, click the Overall Analysis tab.
On the Overall Analysis tab, you can view the general metrics of the application, such as the total number
of requests and the average response time. You can also view the services that are
provided by the application and the services on which the application depends. For
more information, see Overall tab.
On the right side of the Application Overview page, click the Topology Graph tab.
On the Topology Graph tab, you can view the parent components and child components of the application,
and the relationships among the components. You can also view metrics such as the
number of requests, response time, and error rate. For more information, see Topology tab.
Check the system resource usage of the application.
In the left-side navigation pane, choose Monitor > Prometheus.
On the Prometheus page, set the namespace and Pod parameters in the upper-left corner and select the required time range in the upper-right
corner.
On this page, you can view the system information about the application, such as the
IP address, status, containers, CPU utilization, and memory usage of the selected
pod. For more information, see View Grafana dashboards.