This topic describes the exceptions that may occur when you migrate application configurations and how to handle them.

Incorrect file version

Error message
FATA Version 2.1 of Docker Compose is not supported. Please use version 1, 2 or 3

Cause

The conversion is interrupted because Kompose supports only Docker Compose files of versions 1, 2, and 3, not minor versions such as 2.1 or other 2.X versions.

Handling method

Change version: '2.X' to version: '2' in the Swarm Compose file and use Kompose to convert the file again.
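
For reference, the following is a minimal sketch of the change in the Swarm Compose file; the service name and image are placeholders:

  # Before the change, Kompose rejects the minor version:
  version: '2.1'
  services:
    web:                    # placeholder service name
      image: nginx:latest   # placeholder image

  # After the change, the same services are declared under version '2':
  version: '2'
  services:
    web:
      image: nginx:latest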

Failed to parse keys

  • Error message
    ERRO Could not parse config for project source : Unsupported config option for account-db service: 'external'

    Cause

    The conversion is interrupted because Kompose cannot parse the external key.

    Handling method

    If an exception of the ERRO or FATA severity occurs, delete the configuration that causes the exception from the Swarm Compose file and use Kompose to convert the file again. Then, manually migrate the deleted configuration (see the sketch after this list).

  • Error message
    ERRO Could not parse config for project source : Unsupported config option for gateway service: 'net'

    Cause

    The conversion is interrupted because Kompose cannot parse the net key.

    Handling method

    Delete the configuration that causes the exception from the Swarm Compose file and use Kompose to convert the file again. Later, manually migrate the configuration.
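
For reference, the following is a minimal sketch of the kind of entries to remove before rerunning Kompose; the service names, images, and key values are placeholders, and the deleted settings must be recreated manually in the Kubernetes cluster afterwards:

  services:
    account-db:
      image: mysql:5.7                      # placeholder image
      external:                             # unsupported key: delete before conversion
        host: db.example.com                # hypothetical value
    gateway:
      image: registry.example.com/gateway   # placeholder image
      net: host                             # unsupported key: delete before conversion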

Invalid value types

  • Error message
    ERRO Could not parse config for project source : Service 'auth-service' configuration key 'labels' contains an invalid type, it should be an array or object
    Unsupported config option for auth-service service: 'latest_image'

    Cause

    Kompose cannot convert the latest_image key because its value is of the Boolean type instead of the string type.

    Handling method

    Change the value from the Boolean type to the string type in the Swarm Compose file. For example, change true to 'true' (see the sketch after this list).

    Note This exception occurs for the following keys:
    • aliyun.latest_image: true
    • aliyun.global: true
  • Error message
    ERRO Could not parse config for project source : Cannot unmarshal '30' of type int into a string value

    Cause

    An invalid value type is detected: the value of an aliyun.log_* key is the integer 30. The value must be a string, so it must be enclosed in single quotation marks (').

    Handling method

    Change 30 to '30' in the Swarm Compose file and use Kompose to convert the file again.
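
For reference, the following is a minimal sketch of the corrected labels in the Swarm Compose file; the service name, image, and the aliyun.log_* key are placeholders:

  services:
    auth-service:
      image: registry.example.com/auth-service   # placeholder image
      labels:
        aliyun.latest_image: 'true'   # was: true (Boolean); must be the string 'true'
        aliyun.global: 'true'         # was: true (Boolean); must be the string 'true'
        aliyun.log_ttl: '30'          # hypothetical aliyun.log_* key; was: 30 (integer); must be the string '30'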

Unsupported keys

  • Error message
    WARN Unsupported hostname key - ignoring

    Cause

    In the Swarm cluster, the hostname key specifies a hostname that can be used as the domain name for accessing a service. However, the Swarm Compose file does not specify the container port, so Kompose cannot automatically convert the hostname key to a matching service for the Kubernetes cluster. As a result, you cannot access the corresponding application through the hostname in the Kubernetes cluster.

    Handling method

    Deploy the Kubernetes resource files first and then create a service in the Container Service - Kubernetes console. When you create the service, use the hostname as the service name and the container port as the service port (see the sketch after this list).

  • Error message
    WARN Unsupported links key - ignoring

    Cause

    Similar to the hostname key, the links key specifies names or aliases for accessing services. However, the Swarm Compose file does not specify the container port, so Kompose cannot automatically convert the links key to matching services for the Kubernetes cluster. As a result, you cannot access the corresponding applications through the names or aliases specified by the links key in the Kubernetes cluster.

    Handling method

    Deploy the Kubernetes resource files first and then create services in the Container Service - Kubernetes console. When you create the services, use the names or aliases specified by the links key as the service names and the container ports as the service ports.
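
If you prefer to define the service in a YAML file instead of the console, the following is a minimal sketch, assuming a hostname (or link alias) of gateway and a container port of 4000 exposed by pods labeled app: gateway; all of these values are placeholders:

  apiVersion: v1
  kind: Service
  metadata:
    name: gateway        # use the hostname or link alias as the service name
  spec:
    selector:
      app: gateway       # must match the labels of the target pods
    ports:
      - port: 4000       # use the container port as the service port
        targetPort: 4000

Other applications in the same namespace can then reach this service by its name, which matches the access behavior of the hostname and links keys in Swarm.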

Defects of key conversion

  • Error message
    WARN Handling aliyun routings label: [{rabbitmq.your-cluster-id.alicontainer.com  http 15672}]

    Cause

    In the Swarm cluster, a test domain name provided by Container Service for Swarm is configured for simple routing. This test domain name includes the ID of the Swarm cluster. Kompose cannot automatically convert this test domain name to one for the Kubernetes cluster because Kompose does not know the ID of the Kubernetes cluster. You must manually update the domain name.

    Handling method

    Find the *-ingress.yaml file of each ingress and replace the your-cluster-id.alicontainer.com part of each host field with the test domain name of the Kubernetes cluster. To obtain the test domain name of the Kubernetes cluster, follow these steps:
    1. Log on to the Container Service - Kubernetes console. In the left-side navigation pane, choose Cluster > Cluster. On the Clusters page, find the target cluster Kubernetes-piggymetrics-cluster and click Manage in the Actions column.
    2. On the Basic Information page, obtain the value of Testing Domain, for example, *.c7f537c92438f415b943e7c2f8ca30b3b.cn-zhangjiakou.alicontainer.com. In the Kubernetes resource file of each ingress, replace .your-cluster-id.alicontainer.com with the part of this test domain name that follows the asterisk (*), that is, .c7f537c92438f415b943e7c2f8ca30b3b.cn-zhangjiakou.alicontainer.com.
  • Error message
    WARN Handling aliyun routings label: [{gateway.your-cluster-id.alicontainer.com  http 4000} {gateway.swarm.piggymetrics.com  http 4000}]

    Cause

    In the Swarm cluster, multiple domain names are configured for simple routing. However, Kompose can convert only the first domain name. You must manually add rules for other domain names.

    Handling method

    In the generated *-ingress.yaml files, manually add a rule for each domain name that Kompose did not convert, as shown in the sketch after this list.
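
For reference, the following is a minimal sketch of the edits to a generated *-ingress.yaml file, assuming the gateway service and the sample test domain shown above; the ingress name, service name, and port are placeholders:

  apiVersion: extensions/v1beta1
  kind: Ingress
  metadata:
    name: gateway-ingress   # placeholder name
  spec:
    rules:
      # Replace the cluster ID placeholder with the test domain name of the Kubernetes cluster.
      # was: gateway.your-cluster-id.alicontainer.com
      - host: gateway.c7f537c92438f415b943e7c2f8ca30b3b.cn-zhangjiakou.alicontainer.com
        http:
          paths:
            - backend:
                serviceName: gateway
                servicePort: 4000
      # Manually add a rule for each domain name that Kompose did not convert.
      - host: gateway.swarm.piggymetrics.com
        http:
          paths:
            - backend:
                serviceName: gateway
                servicePort: 4000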

Application deployment failures

  • Error message
    error: error validating "logtail2-daemonset.yaml": error validating data: ValidationError(DaemonSet.status): missing required field "numberReady" in io.k8s.api.extensions.v1beta1.DaemonSetStatus; if you choose to ignore these errors, turn validation off with --validate=false

    Cause

    Kompose automatically converts a service containing the aliyun.global: true key in the Swarm cluster to a DaemonSet in the Kubernetes cluster. However, the generated Kubernetes resource file contains the status field that records the intermediate status. This field causes the deployment failure.

    status:
      currentNumberScheduled: 0
      desiredNumberScheduled: 0
      numberMisscheduled: 0

    Handling method

    Delete the status field from the generated Kubernetes resource file *-daemonset.yaml and then deploy the application.

  • Error message
    error: error parsing auth-service-deployment.yaml: error converting YAML to JSON: yaml: line 26: did not find expected key

    Cause

    The Kubernetes resource file fails to be parsed because an expected field is not found in the specified line. The most common cause is incorrect indentation, which makes Kubernetes treat the field as a child of the previous field rather than as a parallel field.

    Handling method

    Adjust the indentation of the specified line so that the field is nested at the correct level, as described in the official Kubernetes documentation (see the sketch after this list).

  • Error message
    error: error parsing auth-service-deployment.yaml: error converting YAML to JSON: yaml: line 34: found character that cannot start any token

    Cause

    The specified line of the Kubernetes resource file contains a character that cannot start a YAML token, typically a tab character.

    Handling method

    In the specified line of the generated Kubernetes resource file, replace the unsupported characters, such as tab characters, with space characters.
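
For reference, the following is a minimal sketch of a correctly written container section from a *-deployment.yaml file; the names and values are placeholders, and the comments mark the two pitfalls described above:

  spec:
    containers:
      - name: auth-service                         # placeholder container name
        image: registry.example.com/auth-service   # placeholder image
        # Sibling fields such as image, ports, and env must be indented to the same level.
        # Indenting ports one level deeper nests it under the previous field and causes
        # errors such as "did not find expected key".
        ports:
          - containerPort: 5000                    # hypothetical container port
        # Use spaces only for indentation; a tab character anywhere in the file causes
        # "found character that cannot start any token".
        env:
          - name: CONFIG_URI                       # hypothetical environment variable
            value: "http://config:8888"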

Application startup failures

  • Error message

    A pod kept on restarting, and the health check failed.
    Initialized: True
    Ready: False
    ContainersReady: False
    PodScheduled: True

    Cause

    Kubernetes uses liveness and readiness probes to check whether a pod is alive and ready. These probes are similar to aliyun.probe.url and aliyun.probe.cmd in Swarm, and Kompose converts both keys to liveness probes in the Kubernetes cluster. In Swarm, aliyun.probe.url and aliyun.probe.cmd only mark the container status when an issue is detected and do not restart the container. In Kubernetes, however, when a liveness probe detects an issue in a pod, it stops pod initialization and restarts the pod. If a liveness probe is improperly configured, the pod keeps restarting.

    Handling method
    1. Check whether the exception is caused by a probe.

      Delete the configuration of the liveness probe or readiness probe and check whether the application can start. If the application starts, the probe configuration is incorrect.

    2. Correct the probe configuration.

      Obtain the actual time required to start the pod. Then, adjust the liveness probe settings in the Kubernetes resource file, especially the initialDelaySeconds and timeoutSeconds fields, and check the restartPolicy field of the pod (see the sketch after this list). For more information, see Create deployments by using images.

  • Error message

    A pod kept on restarting, and the health check failed.
    Initialized: True
    Ready: False
    ContainersReady: False
    PodScheduled: True
    The following log was generated for the pod:
    2019-05-22 06:42:09.245  INFO [gateway,,,] 1 --- [           main] c.c.c.ConfigServicePropertySourceLocator : Connect Timeout Exception on Url - http://config:8888. Will be trying the next url if available

    Cause

    The request to the URL http://config:8888 timed out because the network mode of the pod is incorrect: hostNetwork: true is set. In this mode, the pod uses the network of the Elastic Compute Service (ECS) instance, so the name of the Kubernetes service config cannot be resolved.

    After the configuration was corrected in the gateway-deployment.yaml file and the file was redeployed through the kubectl apply command, this error still occurred.

    Handling method

    Sometimes, a modified *-deployment.yaml file may fail to take effect after it is reapplied.

    To make sure that the modified file takes effect, log on to the Container Service - Kubernetes console, delete the application, and use kubectl to deploy the Kubernetes resource file again.
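
For reference, the following is a minimal sketch of the relevant parts of a corrected *-deployment.yaml file, assuming an HTTP health check endpoint at /health on container port 8080 and a startup time of about 60 seconds; all names, ports, and values are placeholders that you must replace with the measured values for your application:

  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: gateway                             # placeholder name
  spec:
    replicas: 1
    selector:
      matchLabels:
        app: gateway
    template:
      metadata:
        labels:
          app: gateway
      spec:
        # Do not set hostNetwork: true; the pod must use the cluster network so that
        # Kubernetes service names such as config can be resolved.
        containers:
          - name: gateway
            image: registry.example.com/gateway   # placeholder image
            livenessProbe:
              httpGet:
                path: /health                     # hypothetical health check endpoint
                port: 8080
              initialDelaySeconds: 60             # allow enough time for the application to start
              timeoutSeconds: 5
        restartPolicy: Always

If the modified file still does not take effect after you run the kubectl apply command, delete the application in the console and deploy the file again, as described above.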