This topic describes the exceptions that may occur when you migrate application configurations from a Container Service for Swarm cluster to a Container Service for Kubernetes (ACK) cluster. It also describes how to fix these exceptions.
Incorrect file version
FATA Version 2.1 of Docker Compose is not supported. Please use version 1, 2 or 3
Cause
The conversion is interrupted because Kompose supports only Compose files of versions 1, 2, and 3. Minor versions such as 2.X are not supported.
Solution
In the Compose file of the Container Service for Swarm cluster, change version: '2.X' to version: '2' and use Kompose to convert the file again.
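A minimal sketch of the change (the service definition is hypothetical):

```yaml
# Swarm Compose file: use the major version only.
# Before (rejected by Kompose): version: '2.1'
version: '2'
services:
  web:
    image: nginx:1.25
```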
Key parsing errors
-
Error message
ERRO Could not parse config for project source : Unsupported config option for account-db service: 'external'
Cause
The conversion is interrupted because Kompose cannot parse the external key.
Solution
If an exception of the ERRO or FATA severity occurs, delete the configuration that causes the exception from the Swarm Compose file. Then, use Kompose to convert the file again and manually migrate the configuration. For more information, see Application configuration parameters, Application release parameters, Network configuration parameters, and Log configuration parameters.
-
Error message
ERRO Could not parse config for project source : Unsupported config option for gateway service: 'net'
Cause
The conversion is interrupted because Kompose cannot parse the net key.
Solution
Delete the configuration that causes the exception from the Swarm Compose file and use Kompose to convert the file again. Then, manually migrate the configuration. For more information, see Application configuration parameters, Application release parameters, Network configuration parameters, and Log configuration parameters.
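For example, assuming a hypothetical gateway service, the unsupported keys are removed from the Compose file before conversion and then migrated manually:

```yaml
services:
  gateway:
    image: example/gateway:1.0   # hypothetical image
    # net: host     # removed before conversion: Kompose cannot parse this key
    # external:     # likewise remove 'external' and migrate it manually
```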
Invalid value types
-
Error message
ERRO Could not parse config for project source : Service 'auth-service' configuration key 'labels' contains an invalid type, it should be an array or object Unsupported config option for auth-service service: 'latest_image'
Cause
Kompose cannot convert the latest_image key because its value type is invalid.
Solution
Change the value type from BOOLEAN to STRING in the Swarm Compose file. For example, change true to 'true'.
Note: This exception occurs for the following keys:
- aliyun.latest_image: true
- aliyun.global: true
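For example, quoting the label values in a hypothetical service definition avoids the error:

```yaml
services:
  auth-service:
    image: example/auth-service:1.0   # hypothetical image
    labels:
      aliyun.latest_image: 'true'     # was: true (BOOLEAN) — must be a STRING
      aliyun.global: 'true'           # was: true
```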
-
Error message
ERRO Could not parse config for project source : Cannot unmarshal '30' of type int into a string value
Cause
An invalid value type is detected. Check whether the value of an aliyun.log_* key is 30 in the Compose file. The value must be enclosed in a pair of apostrophes ('') because it must be a string, not an integer.
Solution
In the Swarm Compose file, change 30 to '30' and use Kompose to convert the file again.
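For illustration, with a hypothetical aliyun.log_* label (the exact key name depends on your configuration):

```yaml
services:
  gateway:
    image: example/gateway:1.0    # hypothetical image
    labels:
      aliyun.log_ttl_access: '30' # hypothetical key; was: 30 (int) — must be a string
```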
Unsupported keys
-
Error message
WARN Unsupported hostname key - ignoring
Cause
In the Container Service for Swarm cluster, the hostName key specifies a hostname that can be used as a domain name to access services. However, the Swarm Compose file does not specify the corresponding container port, and Kompose cannot automatically create a matching service for the ACK cluster. Therefore, you cannot access applications by using the hostname in the ACK cluster.
Solution
Deploy the Kubernetes resource files and then create a service in the ACK console. When you create the service, use hostName as the service name, and set the service port and container port to the same value.
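A sketch of such a Service, assuming a hostname of web and a container port of 8080 (both hypothetical):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web            # use the hostname from the Swarm Compose file
spec:
  selector:
    app: web           # must match the labels of the deployed pods
  ports:
    - port: 8080       # service port
      targetPort: 8080 # container port, set to the same value
```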
-
Error message
WARN Unsupported links key - ignoring
Cause
Similar to the hostName key, the links key specifies link names or aliases that can be used to access services. However, the Swarm Compose file does not specify the corresponding container port, and Kompose cannot automatically create a matching service for the ACK cluster. Therefore, you cannot access applications by using link names or aliases in the ACK cluster.
Solution
Deploy the Kubernetes resource files and then create services in the ACK console. When you create the services, use the link names or aliases specified by the links key as the service names, and set the service port and container port to the same value.
Defects of key conversion
-
Error message
WARN Handling aliyun routings label: [{rabbitmq.your-cluster-id.alicontainer.com http 15672}]
Cause
In the Container Service for Swarm cluster, a test domain name provided by Container Service for Swarm is configured for simple routing. This test domain name is related to the ID of the Container Service for Swarm cluster. Kompose cannot automatically convert this test domain name to that of an ACK cluster because Kompose does not have the ID of the ACK cluster. You must manually modify the Ingress file and update the domain name.
Solution
Find the *-ingress.yaml file of each Ingress and replace host: your-cluster-id.alicontainer.com with the test domain name of the ACK cluster. To obtain the test domain name, perform the following operations:
- Log on to the ACK console and go to the Clusters page. Find the Kubernetes-piggymetrics-cluster cluster and click Manage in the Actions column.
- On the Basic Information tab, find the value in the Testing Domain field, for example, *.c7f537c92438f415b943e7c2f8ca30b3b.cn-zhangjiakou.alicontainer.com. In the Ingress file, replace .your-cluster-id.alicontainer.com with this domain.
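For example, an *-ingress.yaml file might be updated as follows (the API version and backend details depend on the generated file and are assumptions here):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: rabbitmq-ingress   # hypothetical name
spec:
  rules:
    # was: host: rabbitmq.your-cluster-id.alicontainer.com
    - host: rabbitmq.c7f537c92438f415b943e7c2f8ca30b3b.cn-zhangjiakou.alicontainer.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: rabbitmq
                port:
                  number: 15672
```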
-
Error message
WARN Handling aliyun routings label: [{gateway.your-cluster-id.alicontainer.com http 4000} {gateway.swarm.piggymetrics.com http 4000}]
Cause
In the Container Service for Swarm cluster, multiple domain names are configured for simple routing. However, Kompose can convert only one domain name. You must manually add Ingress rules for other domain names.
Solution
Modify the generated Kubernetes resource files. For more information, see Application configuration parameters, Application release parameters, Network configuration parameters, and Log configuration parameters.
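Based on the warning above, a sketch of the manually added rule for the second domain (the backend details are assumptions):

```yaml
spec:
  rules:
    - host: gateway.your-cluster-id.alicontainer.com  # converted by Kompose; replace with the ACK testing domain
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: gateway
                port:
                  number: 4000
    - host: gateway.swarm.piggymetrics.com            # manually added rule for the second domain
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: gateway
                port:
                  number: 4000
```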
Application deployment failures
-
Error message
error: error validating "logtail2-daemonset.yaml": error validating data: ValidationError(DaemonSet.status): missing required field "numberReady" in io.Kubernetes.api.extensions.v1beta1.DaemonSetStatus; if you choose to ignore these errors, turn validation off with --validate=false
Cause
Kompose automatically converts a service containing the aliyun.global: true key in the Container Service for Swarm cluster to a DaemonSet in the ACK cluster. However, the generated Kubernetes resource files contain the status field that records the intermediate status. This field causes the deployment failure.
status:
  currentNumberScheduled: 0
  desiredNumberScheduled: 0
  numberMisscheduled: 0
Solution
Delete the status field from the generated Kubernetes resource file *-daemonset.yaml and then deploy the application again.
-
Error message
error: error parsing auth-service-deployment.yaml: error converting YAML to JSON: yaml: line 26: did not find expected key
Cause
The Kubernetes resource files cannot be parsed because the expected field is not found in the specified line. In most cases, the cause is invalid indentation. Kubernetes processes the current field as a sub-field of the previous field instead of processing it as a parallel field.
Solution
Correct the specified line of the Kubernetes resource files based on the field list and official documentation provided by ACK.
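For example, an indentation error of this kind looks as follows (the field names are illustrative):

```yaml
# Wrong: 'ports' is indented under 'image' and is parsed as its sub-field.
#     image: example/auth-service:1.0
#       ports:
#         - containerPort: 5000
# Right: 'ports' aligns with 'image' as a parallel field of the container.
containers:
  - name: auth-service
    image: example/auth-service:1.0
    ports:
      - containerPort: 5000
```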
-
Error message
error: error parsing auth-service-deployment.yaml: error converting YAML to JSON: yaml: line 34: found character that cannot start any token
Cause
Characters that are not allowed for a token, such as tab characters, exist in the specified line of the Kubernetes resource files.
Solution
In the generated Kubernetes resource file, such as auth-service-deployment.yaml, replace the tab characters in the specified line with space characters.
Application startup failures
-
Error message
A container continues to restart and the health check fails.
Initialized: True
Ready: False
ContainersReady: False
PodScheduled: True
Cause
In ACK clusters, liveness probes and readiness probes are used to check whether containers are alive or ready. These probes are similar to aliyun.probe.url and aliyun.probe.cmd in Swarm. Kompose converts both aliyun.probe.url and aliyun.probe.cmd to liveness probes. In Container Service for Swarm clusters, aliyun.probe.url and aliyun.probe.cmd mark the container status only when an issue occurs. However, in ACK clusters, when a liveness probe detects an exception, it automatically stops container initialization and restarts the container.
Solution
- Check whether the exception is caused by probe configurations.
  Delete the liveness probes or readiness probes, and check whether the application can start. If the application starts, the probe configuration causes this issue.
- Modify the probe configuration.
  Check the amount of time that is required to start the container. Then, adjust the settings of the liveness probe. Pay attention to the settings of the initialDelaySeconds, periodSeconds, and timeoutSeconds fields. For more information, see Use an image to create a stateless application.
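A sketch of adjusted liveness probe settings (the path, port, and timing values are assumptions to tune for your application):

```yaml
livenessProbe:
  httpGet:
    path: /health           # hypothetical health endpoint
    port: 8080
  initialDelaySeconds: 60   # give the container enough time to start
  periodSeconds: 10         # interval between checks
  timeoutSeconds: 5         # per-check timeout
```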
-
Error message
A container continues to restart and the health check fails.
Initialized: True
Ready: False
ContainersReady: False
PodScheduled: True
The following container log is displayed:
2019-05-22 06:42:09.245 INFO [gateway,,,] 1 --- [ main] c.c.c.ConfigServicePropertySourceLocator : Connect Timeout Exception on Url - http://config:8888. Will be trying the next url if available
Cause
The request to config:8888 timed out because the network mode of the pod is invalid: hostNetwork: true is set. Therefore, the Elastic Compute Service (ECS) instances cannot resolve the name of the Kubernetes service.
After you modify the configuration in the gateway-deployment.yaml file and run the kubectl apply command to redeploy the file, the issue may persist. This occurs because the modified *-deployment.yaml file does not take effect.
Solution
To make sure that the modified file takes effect, log on to the ACK console, delete the corresponding application, and use kubectl to deploy the application again.
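For reference, the kind of change to make in gateway-deployment.yaml before redeploying (a sketch; the image name is an assumption):

```yaml
spec:
  template:
    spec:
      # hostNetwork: true   # removed: pods on the host network use the ECS DNS
      #                     # and cannot resolve service names such as 'config'
      containers:
        - name: gateway
          image: example/gateway:1.0
```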