
Application parameter configurations

Last Updated: May 08, 2018

This document helps you understand the parameters on the page when you create a swarm application by using an image, so that you can configure them smoothly. For some parameters, reference documents are provided.

Image Name

  • Select an existing image in the image list.

  • Enter the image address directly. Take the image address of a Docker Hub user’s WordPress container as an example: docker.io/namespace/wordpress:tag is a complete image address composed of the domain name, namespace, image name, and tag.

Image Version

You can specify the image version, namely, the image tag, after selecting an existing image in the image list. If not specified, the latest version is used by default.
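
As a minimal sketch, the image address and tag from the examples above can be written in a compose (v2) template as follows; the service name wordpress is only a placeholder.

    version: '2'
    services:
      wordpress:                                   # placeholder service name
        # domain name/namespace/image name:tag; if the tag is omitted, latest is used
        image: docker.io/namespace/wordpress:tag

The snippets in the following sections show only the keys to add under such a service definition.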

Scale

Configure the number of containers. Running multiple containers improves application availability.
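
A sketch of how the container count might be expressed in a compose template, assuming the aliyun.scale extension label from the Container Service compose reference; the value 3 is a placeholder.

    # Added under the service definition; assumed Alibaba Cloud extension label
    labels:
      aliyun.scale: '3'   # run three containers of this service

In a real template, all such extension labels for a service are merged under a single labels key.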

Network Mode

Select Default or host.

  • Default: The bridge network mode. This mode assigns an independent network namespace to each container and connects the container to the default bridge docker0. With this configuration, you can see that an eth0 interface is created in the container.

  • Host: The container shares the network stack of the host. Containers created in this mode can see all the network devices on the host and have full access to these devices.

For more information about Docker container network, see Docker container networking.
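
A sketch of the corresponding compose setting, assuming the standard network_mode key of the compose v2 format.

    # Added under the service definition
    network_mode: host    # share the host network stack; omit this key to keep the default bridge mode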

Restart

Specify the restart policy for the container. For more information, see restart.

  • With this check box cleared, the system does not restart the container under any circumstances.

  • With this check box selected, the system tries to restart the container until the specified container is running normally.
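
A sketch of the equivalent compose setting, using the standard restart key; always corresponds to the check box being selected, and "no" to it being cleared.

    # Added under the service definition
    restart: always   # keep restarting the container until it runs normally
    # restart: "no"   # never restart the container (quotes keep YAML from parsing no as false)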

Command

Configure the command that is run by default after the container starts, together with its parameters. We recommend that you use the Exec format. The command runs when the container starts, provided that docker run does not specify another command. For more information, see command.

The default command specified by command is ignored if docker run specifies other commands.

Command has three formats:

  • Exec format: CMD ["executable","param1","param2"]. This is the recommended format of command.

  • CMD ["param1","param2"]. Use together with entrypoint command that is in the Exec format to provide the additional parameters.

  • Shell format: CMD command param1 param2.
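
A sketch of the Exec format in a compose template, using the standard command key; the nginx command below is only an illustrative placeholder.

    # Added under the service definition
    command: ["nginx", "-g", "daemon off;"]   # Exec format: executable followed by its parameters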

Entrypoint

The command that is executed to start the container. The entrypoint command lets you run a container as an application or a service.

Entrypoint is similar to CMD: both can specify the command to be run and the corresponding parameters. The difference is that entrypoint is not ignored and is always run, even if another command is specified when you run docker run.

Entrypoint has two formats:

  • Exec format: ENTRYPOINT ["executable", "param1", "param2"]. This is the recommended format of entrypoint.
  • Shell format: ENTRYPOINT command param1 param2.
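
A sketch of both keys used together in a compose template, using the standard entrypoint and command keys; the script path and parameters are placeholders.

    # Added under the service definition
    entrypoint: ["/docker-entrypoint.sh"]   # always runs, even if docker run passes another command
    command: ["--port", "8080"]             # default parameters appended to the entrypoint (placeholders)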

CPU Limit and Memory Limit

A CPU value of 100 indicates one core, and memory is measured in MB. You can configure the CPU limit and memory limit for a container, which facilitates your resource planning. The corresponding compose labels are cpu_shares and mem_limit. For more information, see Restrict container resources.
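
A sketch of the two compose labels mentioned above; the values are placeholders, with 100 representing one core as described.

    # Added under the service definition
    cpu_shares: 100   # 100 indicates one core, as described above
    mem_limit: 512m   # memory limit; 512 MB in this example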

Capabilities

By default, the root user in Docker containers runs with strictly limited permissions. With Linux kernel capabilities, the related permissions can be granted to containers. For the parameters used to grant permissions to containers, see Runtime privilege and Linux capabilities.

The related parameters are as follows:

  • ADD field: Corresponds to the --cap-add parameter, which adds Linux capabilities. Enter the capability keys to be added in this field to grant the corresponding permissions to the containers.
  • DROP field: Corresponds to the --cap-drop parameter, which drops Linux capabilities. Enter the capability keys that containers have by default in this field to remove the corresponding permissions from the containers.
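
A sketch using the standard cap_add and cap_drop compose keys, which correspond to the ADD and DROP fields; the capability names are examples only.

    # Added under the service definition
    cap_add:
      - SYS_PTRACE    # example capability to grant
    cap_drop:
      - NET_RAW       # example default capability to remove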

Container Config

Select the stdin check box to enable standard input for containers. Select the tty check box to allocate a virtual terminal so that you can send signals to the containers. These two options are usually used together, which binds the terminal (tty) to the container’s standard input (stdin). For example, an interactive program obtains standard input from you and then displays it in the terminal.
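
A sketch of the corresponding compose settings, using the standard stdin_open and tty keys.

    # Added under the service definition
    stdin_open: true   # keep standard input (stdin) open
    tty: true          # allocate a virtual terminal (tty)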

Port Mapping

Specify the port mapping between the host and the container, and select TCP or UDP as the protocol. Port mapping routes traffic between the host and the container and makes the container accessible from outside.

Port mapping is the prerequisite for the configurations of simple routing and Server Load Balancer routing. The container provides external services by using the configured port mapping.
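
A sketch using the standard ports key; the host ports below are placeholders.

    # Added under the service definition
    ports:
      - "8080:80"       # host port 8080 -> container port 80 (TCP by default)
      - "5353:53/udp"   # UDP mapping example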

Web Routing

After a cluster is created in Container Service, the acsrouting application, which includes the routing service (namely, routing), is created automatically to provide the simple routing function. An instance of this service is deployed on each node. On a node, the acsrouting_routing_index container forwards requests within the cluster to route HTTP or HTTPS services. For more information, see Simple routing - Supports HTTP and HTTPS.

Note: When you expose HTTP/HTTPS services, you do not need to configure a specific host port. The container port can be accessed directly over the overlay network or a Virtual Private Cloud (VPC).
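
A sketch of how simple routing is typically declared, assuming the aliyun.routing.port_<container port> extension label from the simple routing documentation; the domain prefix wordpress is a placeholder.

    # Added under the service definition; assumed Alibaba Cloud extension label
    labels:
      aliyun.routing.port_80: wordpress   # route HTTP requests for this domain prefix to container port 80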

Load Balancer

Configure this parameter to control the routing access path yourself, namely the routing mapping of Server Load Balancer frontend port > backend host port > container port.

Configure the port mapping in advance. Then, configure the mapping between container_port and $scheme://$[slb_name|slb_id]:$slb_front_port. For more information about the Server Load Balancer labels, see lb.
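
A sketch of the lb label in the format described above, assuming the aliyun.lb.port_<container port> extension label; the Server Load Balancer name and ports are placeholders.

    # Added under the service definition; assumed Alibaba Cloud extension label
    labels:
      # $scheme://$[slb_name|slb_id]:$slb_front_port mapped to container port 80
      aliyun.lb.port_80: http://my_slb_name:8080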

Data Volume

We recommend that you use data volumes to store the persistent data generated by containers, which is more secure, and easier to manage, back up, and migrate. For more information, see Use volumes.

  • Create a data volume. Enter the host path or data volume name, enter the container path, and select RW or RO as the data volume permission.

  • Enter the name and permission of another service or container in the volumes_from field, for example, service_name:ro. If the access permission is not specified, RW is used by default. For more information, see volume compose. After the configuration, the containers are authorized to use the data volumes of the specified service or container.
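
A sketch of both options using the standard volumes and volumes_from keys; the paths and the service name are placeholders.

    # Added under the service definition
    volumes:
      - /data/wordpress:/var/www/html:rw   # host path:container path:permission (RW or RO)
    volumes_from:
      - service_name:ro                    # reuse the data volumes of another service, read-only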

Environment

Environment variables are entered as key-value pairs and support formats such as arrays, dictionaries, and boolean values. For more information, see environment-variables.

You can configure the relevant environment variables for Docker containers. Environment variables can be used as flags that represent deployment parameters. You can also use environment variables to pass configurations and build automated deployment scripts.
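
A sketch using the standard environment key; the variable names and values are placeholders.

    # Added under the service definition
    environment:
      - WORDPRESS_DB_HOST=mysql   # key-value pair passed into the container (placeholder)
      - WORDPRESS_DEBUG=true      # boolean-style value (placeholder)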

Labels

Labels apply metadata to Docker objects and can be used to build images, record license information, and describe the relationships among containers, data volumes, and networks to implement powerful features.

Labels are entered as key-value pairs and are stored as strings. You can specify multiple labels for containers. Both Docker native labels and Alibaba Cloud container extension labels are supported.
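
A sketch using the standard labels key; the entries below are placeholders showing one Docker native label and one assumed Alibaba Cloud extension label.

    # Added under the service definition
    labels:
      com.example.department: finance   # Docker native label (placeholder key and value)
      aliyun.scale: '2'                 # Alibaba Cloud extension label (assumed name)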

Smooth Upgrade

Select whether to enable smooth upgrade. Enabling smooth upgrade is equivalent to adding the label rolling_updates=true. Use it together with the probe label to make sure the containers can be updated successfully. For more information, see probe and rolling_updates.
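
A sketch of the two labels used together, assuming the aliyun.rolling_updates and aliyun.probe.url extension label names; the health-check URL is a placeholder.

    # Added under the service definition; assumed Alibaba Cloud extension labels
    labels:
      aliyun.rolling_updates: 'true'             # enable smooth (rolling) upgrade
      aliyun.probe.url: http://container:80/health   # placeholder health check used during the update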

Across Multiple Zones

Select Ensure or Try best.

Select Ensure to deploy containers in two different zones. With this option selected, the container creation fails if the current cluster does not have two zones, or if the containers cannot be distributed across two zones because of limited machine resources.

You can also select Try best. Then, Container Service tries to deploy the containers in two different zones. Containers can still be created even if this condition cannot be met.

If this parameter is not configured, Container Service deploys the containers in one zone by default. For more information, see High availability scheduling.

Auto Scaling

To meet the demands of applications under different loads, Container Service supports auto scaling for the service, which automatically adjusts the number of containers according to the container resource usage in the service.

For more information, see Container auto scaling.
