
Applications Architecture on Kubernetes

This article outlines the best practices for managing IT infrastructure with Kubernetes.

By Alex, Alibaba Cloud Community Blog author

Our previous tutorial was a walkthrough of Kubernetes configurations. In this article, we focus on best practices for running scalable, portable, and robust applications on Kubernetes.

Application architecture matters because it shapes system complexity and determines how the software interacts with its environment and components. Following good design practices makes applications easier to scale and helps avoid common problems of distributed systems. Notably, the environment in which an application runs greatly influences scaling even when standard development methodologies are employed, and Kubernetes is particularly helpful for packaging software for distributed computing. This article elaborates on how to leverage Kubernetes to build flexible, responsive, and easy-to-manage applications.

Designing for Scalability

Architecture design can be completely bespoke depending on the requirements of the software, but the ability to scale horizontally remains one of the most important deciding factors on Kubernetes. Unlike vertical scaling, which adds resources to a single instance, horizontal scaling requires maintaining several copies of your application.

Using microservices on Kubernetes helps create compact modules linked through REST APIs, which scale better than the internal mechanisms of a monolithic program. Discrete components keep functionality scaling simple. There are further considerations when building cloud-native applications, including designing resilience, administration, observability, and environment adaptation into the software. Scalable cloud applications tolerate regular restarts, have minimal failures, and do not corrupt data. This tutorial explores six principles for designing highly scalable applications on Kubernetes.

Scaling Using Microservices

Kubernetes runs applications as microservices in the form of containers grouped into pods. Pods carry stateless or stateful containers depending on the needs of the application. With Kubernetes replication, pods can be auto-scaled horizontally to improve access to resources. The Alibaba Cloud Kubernetes service supports auto scaling to ensure that pods are available across the nodes in your cluster. You also have the flexibility to scale services by adding new nodes to your cluster.
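As a minimal sketch, horizontal auto-scaling can be declared with a HorizontalPodAutoscaler; the Deployment name and thresholds below are illustrative:

```yaml
# Scale the hypothetical "web" Deployment between 2 and 10 replicas,
# targeting 70% average CPU utilization across its pods.
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 2
  maxReplicas: 10
  targetCPUUtilizationPercentage: 70
```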

While containers are flexible, some encapsulation techniques work better on Kubernetes than others. Image building is by far the most important consideration: images should be small and composable, as small images are resource-efficient. Make sure to eliminate build steps from the production image.

Achieve this with Docker multi-stage builds, which separate the build process from the runtime environment. In a multi-stage build configuration, specify the build process and the runtime image separately, and base the production image on a minimal parent image. You can even choose scratch, the most minimal base available on Docker.

However, scratch is slightly risky because it lacks the core Linux tools. As an alternative, the Alpine Linux alpine image offers better Linux compatibility. Docker Hub also hosts optimized images for interpreted languages such as Python and Ruby. Note that Kubernetes pulls smaller images to nodes faster.
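A multi-stage Dockerfile sketch illustrating the pattern above; the Go service and paths are hypothetical, and the same structure applies to other compiled languages:

```dockerfile
# Build stage: full toolchain, never shipped to production.
FROM golang:1.21 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /app .

# Runtime stage: minimal Alpine base; only the compiled binary is copied in.
FROM alpine:3.19
COPY --from=build /app /app
ENTRYPOINT ["/app"]
```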

Applications built as microservices are easily scalable on Kubernetes once they are containerized and deployed in pods. Notably, StatefulSets help scale Cassandra clusters and MongoDB replica sets, so stateless workloads and persistent stateful databases can coexist in the same infrastructure.

Ensuring High Availability

Application architecture affects availability to a great extent. Clusters are prone to failures at both the infrastructure and application levels, and Kubernetes is a powerful availability solution for both. A Kubernetes cluster is composed of etcd, the API server, and nodes, and each of these components benefits from a high availability configuration. Kubernetes also supports load balancers, health checks, and controllers such as replica sets, replication controllers, and StatefulSets.

Kubernetes replica sets maintain a minimum number of active pods in the cluster and immediately deploy replacement pods if any of them crash for any reason. StatefulSets, on the other hand, ensure high availability of stateful applications such as databases.
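In practice, replica sets are usually managed through a Deployment. A minimal sketch that keeps three copies of a hypothetical web server running:

```yaml
# If one of the three pods crashes, the Deployment's replica set
# immediately schedules a replacement.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25
```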

Infrastructure availability also depends on storage: the Kubernetes configuration supports the network file system (NFS) and GlusterFS for distributed file systems, and Flocker for container storage, among others. Such volume plugins help keep stateful workloads available on Kubernetes clusters.

Furthermore, Kubernetes liveness and readiness probes manage the lifecycle of your application components to ensure their health and availability. A liveness probe determines whether an application is actively running in its container: Kubernetes runs the probe periodically and evaluates the response, and if the test fails, the pod is restarted. Readiness probes, on the other hand, determine a pod's readiness to serve traffic.

Applications may need to initialize before accepting requests, and Kubernetes temporarily withholds traffic from a pod while its readiness test fails. This lets pods initialize and undergo routine maintenance without affecting the health of the group as a whole. These built-in capabilities greatly enhance your application's availability without requiring difficult configuration.
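Both probe types are declared on the container spec. A sketch with illustrative endpoints and timings:

```yaml
# Fragment of a container spec. The /healthz and /ready paths
# are hypothetical endpoints the application would expose.
livenessProbe:
  httpGet:
    path: /healthz
    port: 8080
  initialDelaySeconds: 10   # give the app time to start before the first check
  periodSeconds: 15         # restart the container if this check fails
readinessProbe:
  httpGet:
    path: /ready
    port: 8080
  periodSeconds: 5          # withhold traffic while this check fails
```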

Kubernetes Security

Application security is at the heart of successful IT operations, and Kubernetes lets you build security enhancements into the infrastructure. At the API level, transport layer security (TLS) secures access so that users authenticate and reach resources safely. The API server manages service accounts so that the Kubernetes API can authenticate clients and grant them access to the cluster securely.

Kubernetes includes secret objects to store sensitive data such as passwords and tokens and limit the possibility of leakage. Such data is stored in independent objects within the cluster, and pods reference a secret whenever they need it to access other resources.
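A sketch of a secret and how a container might consume it; the names and values here are placeholders, not real credentials:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
type: Opaque
stringData:                  # stored base64-encoded by the API server
  password: example-only
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
  - name: app
    image: example/app:1.0   # illustrative image
    env:
    - name: DB_PASSWORD      # injected at runtime, never baked into the image
      valueFrom:
        secretKeyRef:
          name: db-credentials
          key: password
```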

Kubernetes network policies are also helpful in controlling traffic to pods within a cluster. A network policy defines which pods may communicate with each other over the network and limits exposure according to the rules you set.
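For example, a policy might allow only front-end pods to reach a database tier; the labels below are illustrative:

```yaml
# Permit ingress to pods labeled app=db only from pods labeled app=web.
# All other traffic to the db pods is denied once this policy selects them.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: db-allow-web
spec:
  podSelector:
    matchLabels:
      app: db
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: web
```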

Operating System Independence

As Kubernetes is platform-agnostic and compatible with most containers and cloud platforms, it helps in developing highly portable applications.

Alibaba Cloud provides Kubernetes as a managed service, but you can also deploy it on mainstream Linux distributions and on Windows. At present, container runtimes are based on Docker or rkt, but the platform is flexible enough to accommodate future runtimes. Furthermore, cluster federation enables clusters deployed across multiple clouds and locations to cooperate in managing workloads.

Application Layer Access

Kubernetes services tackle the challenge of routing traffic to pods. While deployments enable scalability, they complicate routing: as pods update, restart, or move, their network addresses change. Kubernetes services maintain routing information for these dynamic pods and control access between infrastructure layers, keeping the information needed to connect to the relevant pods as the environment changes.

Internal Services Access

To begin, determine how your containers need to reach each other. A ClusterIP service gives a set of pods a stable internal IP address, so applications within the cluster can connect to them reliably; traffic sent to the service IP is routed to the backing pods. While that works well for internal application layers, the DNS add-on is also available: Kubernetes DNS lets applications address services by name without first looking up the service IP.
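A minimal internal service sketch; the selector and ports are illustrative:

```yaml
# Pods labeled app=web become reachable at this service's stable
# cluster IP, and via DNS as "web" within the same namespace.
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: ClusterIP
  selector:
    app: web
  ports:
  - port: 80          # port exposed by the service
    targetPort: 8080  # port the container listens on
```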

Public Services

A LoadBalancer service type is the best solution for exposing services publicly. A publicly exposed IP address receives traffic through a load balancer provisioned by the cloud provider, and service pods receive these external requests over a controlled network channel. However, each such service requires its own load balancer, which can make it expensive to run on Kubernetes. Kubernetes Ingress objects alleviate this problem by routing external requests from a single entry point to multiple services.

An ingress controller interprets the set of rules that govern the routing.
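A sketch of such a rule set, routing one host's traffic to the hypothetical "web" service through a single entry point:

```yaml
# Requests for example.com are forwarded to the "web" service on port 80.
# Additional hosts or paths can share the same ingress (and load balancer).
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web
            port:
              number: 80
```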

12-Factor Philosophy

The 12-factor philosophy is useful in designing cloud-native applications. Applied to Kubernetes, it helps you design resilient and effective microservices. Below are a few vital considerations:

1) Codebase: Manage software versions centrally using Git, Mercurial or similar systems.
2) Dependencies: Ensure that your codebase manages all dependencies explicitly.
3) Config: Make sure that your applications have separate configuration parameters.
4) Backing Services: Abstract local and remote services as part of the network configuration parameters.
5) Build, Release, Run: Separate application development, release, and production.
6) Processes: Make sure that applications do not store their state locally; the state should be handled by a backing service as in (4) above.
7) Port Binding: Bind your apps to a port to listen to connections while implementing routing externally.
8) Concurrency: Apply the process model of scalability to run multiple copies across servers concurrently.
9) Disposability: Make sure your applications have the potential to start efficiently and end gracefully.
10) Dev/prod Parity: Match your development and production environments to avoid incompatibilities.
11) Logs: Stream your application logs to standardize accessibility output to external services.
12) Admin Processes: Ship one-off admin processes together with the application code of a specific release.
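On Kubernetes, the Config factor (3) is commonly satisfied by injecting settings from a ConfigMap rather than baking them into the image; a sketch with illustrative names:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  LOG_LEVEL: info       # configuration lives outside the image
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
  - name: app
    image: example/app:1.0   # illustrative image
    envFrom:
    - configMapRef:          # every key becomes an environment variable
        name: app-config
```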

Conclusion

Taking the above practices into account helps you manage your infrastructure easily and leverage Kubernetes for maximum performance. While it may seem difficult at first, structuring your application around containers brings many benefits, especially when you are handling complex deployments. We have noted that architecting applications according to Kubernetes patterns greatly improves performance.

