
Alibaba Cloud Model Studio: OpenClaw

Last Updated: Apr 01, 2026

OpenClaw (formerly Moltbot/Clawdbot) is an open-source personal AI assistant platform that lets you interact with AI models through multiple messaging channels. This topic describes how to integrate OpenClaw with Alibaba Cloud Coding Plan.

Install OpenClaw

Manual installation

  1. Install or update Node.js

    1. Check your current version: Run the following command in your terminal to view your Node.js version (v22.0 or later is required). If you see “command not found,” Node.js is not installed. If the version shown is lower than v22.0, update it.

      To open the terminal: On macOS, press Command + Space → type Terminal → press Enter. On Windows, press the Win key → type Terminal/PowerShell/cmd → press Enter.
      node -v
    2. Download and install: Visit the Node.js website, select the “LTS” version (22.x.x or later), and download the installer for your operating system. Install it after downloading.

      For example: On Windows, download Windows Installer (.msi). On macOS, download macOS Installer (.pkg).
  2. Run the following command to start installing OpenClaw.

    1. macOS/Linux:

      Press Command + Space, type Terminal, and press Enter. Then run the following command:

      curl -fsSL https://openclaw.ai/install.sh | bash
    2. Windows:

      In the taskbar search box, type PowerShell, choose Run as administrator, and run the following command in PowerShell:

      iwr -useb https://openclaw.ai/install.ps1 | iex
  3. After installation completes, a prompt appears automatically. Follow the prompts to finish configuring OpenClaw. Refer to the sample configuration below:

    | Configuration item | Configuration content |
    | --- | --- |
    | I understand this is powerful and inherently risky. Continue? | Select “Yes” |
    | Onboarding mode | Select “QuickStart” |
    | Model/auth provider | Select “Skip for now”; you can configure it later |
    | Filter models by provider | Select “All providers” |
    | Default model | Use the default configuration |
    | Select channel (QuickStart) | Select “Skip for now”; you can configure it later |
    | Configure skills now? (recommended) | Select “No”; you can configure it later |
    | Enable hooks? | Press the space bar to select “Skip for now,” then press Enter |
    | How do you want to hatch your bot? | Select “Hatch in TUI” |
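The Node.js version requirement in step 1 can be checked mechanically. The following sketch parses `node -v`-style output; the `v21.7.3` string is a stand-in for the real command's output:

```shell
# Minimal sketch: extract the major version from `node -v`-style output
# and compare it against the required minimum (v22).
version="v21.7.3"           # stand-in for: version=$(node -v)
major=${version#v}          # strip the leading "v"
major=${major%%.*}          # keep only the major component
if [ "$major" -ge 22 ]; then
  echo "Node.js OK ($version)"
else
  echo "Node.js too old ($version); install v22 or later"
fi
```

Replace the sample string with `version=$(node -v)` to check the real installation.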

Qwen Code guided installation

OpenClaw installation depends on a Node.js environment, and manual installation can run into environment configuration issues. You can instead use Qwen Code to complete the installation and verify it.

  1. Install and configure Qwen Code.

  2. Run the following command in your terminal to start Qwen Code.

    qwen
  3. Enter the following instructions in the Qwen Code dialog box.

    1. macOS/Linux:

        Help me install OpenClaw by running the following commands in order:
      
        1. Prerequisite: Install Node.js (v22.0 or higher). Check with node --version. If already installed but below v22.0, upgrade without uninstalling the existing version.
        2. curl -fsSL https://openclaw.ai/install.sh | bash -s -- --no-onboard
        3. openclaw gateway install
        4. openclaw onboard --non-interactive --accept-risk --flow quickstart --auth-choice skip --skip-channels --skip-skills
        5. Run openclaw status to confirm OpenClaw is running normally
    2. Windows:

      Help me install OpenClaw on Windows by following these steps:
      
      ## Execution instructions
      
      Run all PowerShell commands using this format:
      ```
      powershell -ExecutionPolicy Bypass -Command "<command>"
      ```
      
      ### Notes:
      1. Use the `write_file` tool to create multi-line files instead of here-string syntax.
      2. After modifying environment variables, explicitly refresh `$env:Path` to use them in the same session.
      3. Set a longer timeout (≥120000 ms) for network download commands.
      
      ---
      
      ## Step 1: Check prerequisites
      
      Verify these tools are installed and output their versions:
      - `node --version` (requires v22 or higher)
      - `npm --version`
      - `git --version`
      
      If all are installed and Node.js ≥ v22, skip to Step 4.
      
      ---
      
      ## Step 2: Install Node.js (if missing or below v22)
      
      1. Detect system architecture (x64 / x86 / ARM64).
      2. Download and extract Node.js zip from the official source:
      ```
      powershell -ExecutionPolicy Bypass -Command "$ProgressPreference = 'SilentlyContinue'; Invoke-WebRequest -Uri 'https://nodejs.org/dist/v24.0.0/node-v24.0.0-win-x64.zip' -OutFile \"$env:TEMP\node24.zip\"; Expand-Archive \"$env:TEMP\node24.zip\" -DestinationPath \"$env:LOCALAPPDATA\nodejs-v24\" -Force; Remove-Item \"$env:TEMP\node24.zip\""
      ```
      3. Add to system PATH (permanent; new terminals will recognize it):
      ```
      powershell -ExecutionPolicy Bypass -Command "$nodePath = \"$env:LOCALAPPDATA\nodejs-v24\node-v24.0.0-win-x64\"; $machinePath = [Environment]::GetEnvironmentVariable('PATH', 'Machine'); [Environment]::SetEnvironmentVariable('PATH', \"$nodePath;$machinePath\", 'Machine'); $env:Path = \"$nodePath;$env:Path\"; node --version; npm --version"
      ```
      
      ---
      
      ## Step 3: Install Git (if missing)
      
      1. Download and silently install Git from the official source:
      ```
      powershell -ExecutionPolicy Bypass -Command "$ProgressPreference = 'SilentlyContinue'; Invoke-WebRequest -Uri 'https://github.com/git-for-windows/git/releases/download/v2.53.0.windows.2/Git-2.53.0.2-64-bit.exe' -OutFile \"$env:TEMP\Git-Installer.exe\"; Start-Process -FilePath \"$env:TEMP\Git-Installer.exe\" -ArgumentList '/VERYSILENT','/NORESTART','/NOCANCEL','/SP-','/CLOSEAPPLICATIONS','/RESTARTAPPLICATIONS','/COMPONENTS=icons,ext\reg\shellhere,assoc,assoc_sh' -Wait; Remove-Item \"$env:TEMP\Git-Installer.exe\""
      ```
      2. Refresh PATH and verify installation:
      ```
      powershell -ExecutionPolicy Bypass -Command "$machinePath = [Environment]::GetEnvironmentVariable('PATH', 'Machine'); $env:Path = \"$machinePath;$env:Path\"; git --version"
      ```
      
      ---
      
      ## Step 4: Install OpenClaw
      
      Refresh PATH and install globally:
      ```
      powershell -ExecutionPolicy Bypass -Command "$machinePath = [Environment]::GetEnvironmentVariable('PATH', 'Machine'); $userPath = [Environment]::GetEnvironmentVariable('PATH', 'User'); $env:Path = \"$machinePath;$userPath;$env:Path\"; npm install -g openclaw@latest"
      ```
      
      ---
      
      ## Step 5: Verify installation
      
      ```
      powershell -ExecutionPolicy Bypass -Command "$machinePath = [Environment]::GetEnvironmentVariable('PATH', 'Machine'); $userPath = [Environment]::GetEnvironmentVariable('PATH', 'User'); $env:Path = \"$machinePath;$userPath;$env:Path\"; openclaw --version"
      ```
      
      ---
      
      ## Step 6: Install Gateway
      
      ```
      openclaw gateway install
      ```
      
      ---
      
      ## Step 7: Auto-complete initial configuration
      
      Use the `write_file` tool to create a config file for QuickStart mode:
      
      Config file path: `%USERPROFILE%\.openclaw\config.yaml`
      
      Config file content:
      ```yaml
      # OpenClaw Configuration - QuickStart mode
      
      workspace:
        name: default
        directory: .
      
      gateway:
        mode: local
        auth:
          token: openclaw-quickstart-token
      
      session:
        scope: personal
        dmScope: per-channel
      
      channels:
        - type: tui
          enabled: true
      
      skills:
        enabled: false
      
      hooks:
        enabled: false
      
      security:
        acknowledged: true
        mode: personal
      
      ui:
        hatch: tui
      ```
      
      After creating the config file, run:
      ```
      powershell -ExecutionPolicy Bypass -Command "[Environment]::SetEnvironmentVariable('OPENCLAW_GATEWAY_TOKEN', 'openclaw-quickstart-token', 'User')"
      ```
      
      ---
      
      ## Step 8: Start and use
      
      ```
      # Launch TUI interface
      openclaw tui
      
      # Or check status
      openclaw status
      
      # View Dashboard (visit in browser)
      # http://127.0.0.1:18789/
      ```
  4. Grant permission for Qwen Code to execute commands until installation completes.

  5. Type /exit to exit Qwen Code.

    /exit
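After exiting Qwen Code, you can confirm that everything the steps above installed is actually on `PATH`. A minimal sketch that only reports and changes nothing:

```shell
# Report the version of each expected tool, or flag it as missing.
for cmd in node npm git openclaw; do
  if command -v "$cmd" >/dev/null 2>&1; then
    echo "$cmd: $("$cmd" --version 2>/dev/null | head -n 1)"
  else
    echo "$cmd: not found"
  fi
done
```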

Configure Coding Plan in OpenClaw

  • If OpenClaw is deployed on Simple Application Server, follow Method 2 to configure it through the graphical interface.

  • If OpenClaw is deployed locally or on Elastic Compute Service (ECS), follow Method 1 to use an AI agent (such as Qwen Code) for guided configuration. If you are familiar with OpenClaw configuration, you can also use Method 3 to edit the configuration file directly.

Method 1: Qwen Code guided configuration

  1. Install and configure Qwen Code.

  2. Run the following command in your terminal to start Qwen Code.

    qwen
  3. Enter the following instruction in the Qwen Code dialog box.

    Help me configure OpenClaw to connect to Coding Plan by following these steps:
    
    ## Step 1: Get API Key
    First ask the user: "Please provide your Coding Plan API Key."
    Wait for the user's reply before proceeding.
    
    ## Step 2: Modify configuration file
    1. Open the configuration file: ~/.openclaw/openclaw.json
       - Create the file if it does not exist
       - Important: Must use .json format, not other formats
    
    2. Locate or create the following fields and merge the configuration (keep existing settings unchanged; add new fields if missing):
       - Use "mode": "merge" to avoid overwriting existing configuration
       - Replace YOUR_API_KEY with the actual API Key provided by the user
    {
      "models": {
        "mode": "merge",
        "providers": {
          "bailian": {
            "baseUrl": "https://coding-intl.dashscope.aliyuncs.com/v1",
            "apiKey": "YOUR_API_KEY",
            "api": "openai-completions",
            "models": [
              {
                "id": "qwen3.5-plus",
                "name": "qwen3.5-plus",
                "reasoning": false,
                "input": ["text", "image"],
                "cost": { "input": 0, "output": 0, "cacheRead": 0, "cacheWrite": 0 },
                "contextWindow": 1000000,
                "maxTokens": 65536,
                "compat": {
                  "thinkingFormat": "qwen"
                }
              },
              {
                "id": "qwen3-max-2026-01-23",
                "name": "qwen3-max-2026-01-23",
                "reasoning": false,
                "input": ["text"],
                "cost": { "input": 0, "output": 0, "cacheRead": 0, "cacheWrite": 0 },
                "contextWindow": 262144,
                "maxTokens": 65536,
                "compat": {
                  "thinkingFormat": "qwen"
                }
              },
              {
                "id": "qwen3-coder-next",
                "name": "qwen3-coder-next",
                "reasoning": false,
                "input": ["text"],
                "cost": { "input": 0, "output": 0, "cacheRead": 0, "cacheWrite": 0 },
                "contextWindow": 262144,
                "maxTokens": 65536
              },
              {
                "id": "qwen3-coder-plus",
                "name": "qwen3-coder-plus",
                "reasoning": false,
                "input": ["text"],
                "cost": { "input": 0, "output": 0, "cacheRead": 0, "cacheWrite": 0 },
                "contextWindow": 1000000,
                "maxTokens": 65536
              },
              {
                "id": "MiniMax-M2.5",
                "name": "MiniMax-M2.5",
                "reasoning": false,
                "input": ["text"],
                "cost": { "input": 0, "output": 0, "cacheRead": 0, "cacheWrite": 0 },
                "contextWindow": 196608,
                "maxTokens": 32768
              },
              {
                "id": "glm-5",
                "name": "glm-5",
                "reasoning": false,
                "input": ["text"],
                "cost": { "input": 0, "output": 0, "cacheRead": 0, "cacheWrite": 0 },
                "contextWindow": 202752,
                "maxTokens": 16384,
                "compat": {
                  "thinkingFormat": "qwen"
                }
              },
              {
                "id": "glm-4.7",
                "name": "glm-4.7",
                "reasoning": false,
                "input": ["text"],
                "cost": { "input": 0, "output": 0, "cacheRead": 0, "cacheWrite": 0 },
                "contextWindow": 202752,
                "maxTokens": 16384,
                "compat": {
                  "thinkingFormat": "qwen"
                }
              },
              {
                "id": "kimi-k2.5",
                "name": "kimi-k2.5",
                "reasoning": false,
                "input": ["text", "image"],
                "cost": { "input": 0, "output": 0, "cacheRead": 0, "cacheWrite": 0 },
                "contextWindow": 262144,
                "maxTokens": 32768,
                "compat": {
                  "thinkingFormat": "qwen"
                }
              }
            ]
          }
        }
      },
      "agents": {
        "defaults": {
          "model": {
            "primary": "bailian/qwen3.5-plus"
          },
          "models": {
            "bailian/qwen3.5-plus": {},
            "bailian/qwen3-max-2026-01-23": {},
            "bailian/qwen3-coder-next": {},
            "bailian/qwen3-coder-plus": {},
            "bailian/MiniMax-M2.5": {},
            "bailian/glm-5": {},
            "bailian/glm-4.7": {},
            "bailian/kimi-k2.5": {}
          }
        }
      },
      "gateway": {
        "mode": "local"
      }
    } 
    3. Save the file
    
    ## Step 3: Restart and verify
    1. Run `openclaw gateway restart` to apply the configuration
    2. Run `openclaw models list` to verify success
       - Check that models starting with `bailian/` appear in the output
       - Check that each model has a `configured` tag
       - Fix any errors based on error messages
  4. Grant permission for Qwen Code to execute commands until configuration completes.

  5. After configuration completes, Qwen Code outputs the result of openclaw models list. If models such as bailian/qwen3.5-plus are marked as configured, the setup succeeded.
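The verification in Step 3 can also be scripted. The sketch below assumes `openclaw models list` prints one model per line with a `configured` tag, as described above; adjust the pattern if your version formats its output differently:

```shell
# Check that each expected Coding Plan model shows up as configured.
models_output=$(openclaw models list)
for m in qwen3.5-plus qwen3-max-2026-01-23 qwen3-coder-plus kimi-k2.5; do
  if printf '%s\n' "$models_output" | grep -q "bailian/$m.*configured"; then
    echo "bailian/$m: OK"
  else
    echo "bailian/$m: missing or not configured"
  fi
done
```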

Method 2: Graphical interface configuration

If you deployed OpenClaw using the Simple Application Server deployment solution, configure Coding Plan through the product’s graphical interface. For details, see Simple server configuration method.

Method 3: Direct configuration file modification

Modify configuration file via terminal

  1. Run the following command in your terminal to open the configuration file.

    nano ~/.openclaw/openclaw.json
  2. Initial configuration: Copy the following content into the configuration file. Replace YOUR_API_KEY with your Coding Plan-specific API Key.

    Existing configuration: To preserve existing settings, do not replace the entire file. See How to safely modify an existing configuration?

    {
      "models": {
        "mode": "merge",
        "providers": {
          "bailian": {
            "baseUrl": "https://coding-intl.dashscope.aliyuncs.com/v1",
            "apiKey": "YOUR_API_KEY",
            "api": "openai-completions",
            "models": [
              {
                "id": "qwen3.5-plus",
                "name": "qwen3.5-plus",
                "reasoning": false,
                "input": ["text", "image"],
                "cost": { "input": 0, "output": 0, "cacheRead": 0, "cacheWrite": 0 },
                "contextWindow": 1000000,
                "maxTokens": 65536,
                "compat": {
                  "thinkingFormat": "qwen"
                }
              },
              {
                "id": "qwen3-max-2026-01-23",
                "name": "qwen3-max-2026-01-23",
                "reasoning": false,
                "input": ["text"],
                "cost": { "input": 0, "output": 0, "cacheRead": 0, "cacheWrite": 0 },
                "contextWindow": 262144,
                "maxTokens": 65536,
                "compat": {
                  "thinkingFormat": "qwen"
                }
              },
              {
                "id": "qwen3-coder-next",
                "name": "qwen3-coder-next",
                "reasoning": false,
                "input": ["text"],
                "cost": { "input": 0, "output": 0, "cacheRead": 0, "cacheWrite": 0 },
                "contextWindow": 262144,
                "maxTokens": 65536
              },
              {
                "id": "qwen3-coder-plus",
                "name": "qwen3-coder-plus",
                "reasoning": false,
                "input": ["text"],
                "cost": { "input": 0, "output": 0, "cacheRead": 0, "cacheWrite": 0 },
                "contextWindow": 1000000,
                "maxTokens": 65536
              },
              {
                "id": "MiniMax-M2.5",
                "name": "MiniMax-M2.5",
                "reasoning": false,
                "input": ["text"],
                "cost": { "input": 0, "output": 0, "cacheRead": 0, "cacheWrite": 0 },
                "contextWindow": 196608,
                "maxTokens": 32768
              },
              {
                "id": "glm-5",
                "name": "glm-5",
                "reasoning": false,
                "input": ["text"],
                "cost": { "input": 0, "output": 0, "cacheRead": 0, "cacheWrite": 0 },
                "contextWindow": 202752,
                "maxTokens": 16384,
                "compat": {
                  "thinkingFormat": "qwen"
                }
              },
              {
                "id": "glm-4.7",
                "name": "glm-4.7",
                "reasoning": false,
                "input": ["text"],
                "cost": { "input": 0, "output": 0, "cacheRead": 0, "cacheWrite": 0 },
                "contextWindow": 202752,
                "maxTokens": 16384,
                "compat": {
                  "thinkingFormat": "qwen"
                }
              },
              {
                "id": "kimi-k2.5",
                "name": "kimi-k2.5",
                "reasoning": false,
                "input": ["text", "image"],
                "cost": { "input": 0, "output": 0, "cacheRead": 0, "cacheWrite": 0 },
                "contextWindow": 262144,
                "maxTokens": 32768,
                "compat": {
                  "thinkingFormat": "qwen"
                }
              }
            ]
          }
        }
      },
      "agents": {
        "defaults": {
          "model": {
            "primary": "bailian/qwen3.5-plus"
          },
          "models": {
            "bailian/qwen3.5-plus": {},
            "bailian/qwen3-max-2026-01-23": {},
            "bailian/qwen3-coder-next": {},
            "bailian/qwen3-coder-plus": {},
            "bailian/MiniMax-M2.5": {},
            "bailian/glm-5": {},
            "bailian/glm-4.7": {},
            "bailian/kimi-k2.5": {}
          }
        }
      },
      "gateway": {
        "mode": "local"
      }
    }
  3. Save the file and exit. Run the following command to apply the configuration.

    openclaw gateway restart
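A syntax error in openclaw.json prevents the gateway from loading the new settings, so it is worth validating the file before restarting. A minimal sketch, assuming python3 is available and your file sticks to plain JSON (no JSON5 extensions such as comments):

```shell
# Validate the edited configuration before `openclaw gateway restart`.
if python3 -m json.tool ~/.openclaw/openclaw.json >/dev/null 2>&1; then
  echo "openclaw.json is valid JSON"
else
  echo "openclaw.json is missing or has a syntax error"
fi
```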

Modify configuration file via web browser

  1. Run the following command in your terminal. Your browser automatically opens the OpenClaw dashboard (typically at a local address such as http://127.0.0.1:xxxx), where you can manage conversations and configurations.

    openclaw dashboard
  2. In the left-side menu, select Config > RAW.

    1. Initial configuration: Copy the following content into the Raw JSON5 input box, replacing existing content.

      Existing configuration: To preserve existing settings, do not replace the entire file. See How to safely modify an existing configuration?

    2. Replace YOUR_API_KEY with your Coding Plan-specific API Key.


    {
      "models": {
        "mode": "merge",
        "providers": {
          "bailian": {
            "baseUrl": "https://coding-intl.dashscope.aliyuncs.com/v1",
            "apiKey": "YOUR_API_KEY",
            "api": "openai-completions",
            "models": [
              {
                "id": "qwen3.5-plus",
                "name": "qwen3.5-plus",
                "reasoning": false,
                "input": ["text", "image"],
                "cost": { "input": 0, "output": 0, "cacheRead": 0, "cacheWrite": 0 },
                "contextWindow": 1000000,
                "maxTokens": 65536,
                "compat": {
                  "thinkingFormat": "qwen"
                }
              },
              {
                "id": "qwen3-max-2026-01-23",
                "name": "qwen3-max-2026-01-23",
                "reasoning": false,
                "input": ["text"],
                "cost": { "input": 0, "output": 0, "cacheRead": 0, "cacheWrite": 0 },
                "contextWindow": 262144,
                "maxTokens": 65536,
                "compat": {
                  "thinkingFormat": "qwen"
                }
              },
              {
                "id": "qwen3-coder-next",
                "name": "qwen3-coder-next",
                "reasoning": false,
                "input": ["text"],
                "cost": { "input": 0, "output": 0, "cacheRead": 0, "cacheWrite": 0 },
                "contextWindow": 262144,
                "maxTokens": 65536
              },
              {
                "id": "qwen3-coder-plus",
                "name": "qwen3-coder-plus",
                "reasoning": false,
                "input": ["text"],
                "cost": { "input": 0, "output": 0, "cacheRead": 0, "cacheWrite": 0 },
                "contextWindow": 1000000,
                "maxTokens": 65536
              },
              {
                "id": "MiniMax-M2.5",
                "name": "MiniMax-M2.5",
                "reasoning": false,
                "input": ["text"],
                "cost": { "input": 0, "output": 0, "cacheRead": 0, "cacheWrite": 0 },
                "contextWindow": 196608,
                "maxTokens": 32768
              },
              {
                "id": "glm-5",
                "name": "glm-5",
                "reasoning": false,
                "input": ["text"],
                "cost": { "input": 0, "output": 0, "cacheRead": 0, "cacheWrite": 0 },
                "contextWindow": 202752,
                "maxTokens": 16384,
                "compat": {
                  "thinkingFormat": "qwen"
                }
              },
              {
                "id": "glm-4.7",
                "name": "glm-4.7",
                "reasoning": false,
                "input": ["text"],
                "cost": { "input": 0, "output": 0, "cacheRead": 0, "cacheWrite": 0 },
                "contextWindow": 202752,
                "maxTokens": 16384,
                "compat": {
                  "thinkingFormat": "qwen"
                }
              },
              {
                "id": "kimi-k2.5",
                "name": "kimi-k2.5",
                "reasoning": false,
                "input": ["text", "image"],
                "cost": { "input": 0, "output": 0, "cacheRead": 0, "cacheWrite": 0 },
                "contextWindow": 262144,
                "maxTokens": 32768,
                "compat": {
                  "thinkingFormat": "qwen"
                }
              }
            ]
          }
        }
      },
      "agents": {
        "defaults": {
          "model": {
            "primary": "bailian/qwen3.5-plus"
          },
          "models": {
            "bailian/qwen3.5-plus": {},
            "bailian/qwen3-max-2026-01-23": {},
            "bailian/qwen3-coder-next": {},
            "bailian/qwen3-coder-plus": {},
            "bailian/MiniMax-M2.5": {},
            "bailian/glm-5": {},
            "bailian/glm-4.7": {},
            "bailian/kimi-k2.5": {}
          }
        }
      },
      "gateway": {
        "mode": "local"
      }
    }
  3. Click Save in the top-right corner, then click Update to apply the configuration.

    After the configuration is saved, the apiKey field displays as “__OPENCLAW_REDACTED__”. This redaction protects sensitive data in the frontend only and does not affect actual API calls.


Use OpenClaw

You can use OpenClaw through a web browser or terminal command line.

Web browser

  1. Open a new terminal and run the following command. Your browser will automatically open the OpenClaw dashboard.

    openclaw dashboard
  2. Start a conversation.


Terminal command line

  1. Open a new terminal and run the following command.

    openclaw tui
  2. Start a conversation.


Common commands

| Command | Description | Example |
| --- | --- | --- |
| /help | Show a quick summary of available commands. | /help |
| /status | View the current model, session, gateway, and other status information. | /status |
| /model <model name> | Switch the model used in the current session. | /model qwen3.5-plus |
| /new | Start a new session. | /new |
| /compact | Compress conversation history to free up context window space. | /compact |
| /think <level> | Set the thinking (reasoning) depth. Options: off, low, medium, high. | /think high |
| /skills | Show all available Skills. | /skills |

Switch models

  • Switch model in current session (temporary)

    In your terminal, run openclaw tui to enter the OpenClaw terminal command line. Use /model <model name> to switch models in the current session.

    /model qwen3-coder-next
    The interface returns “model set to qwen3-coder-next” when the switch takes effect.
  • Switch default model (permanent)

    To use a specific model in every new session, modify the agents.defaults.model.primary field to your target model. See Modify configuration file.

    {
        "agents": {
            "defaults": {
                "model": {
                    "primary": "bailian/qwen3.5-plus"
                }
            }
        }
    }

Connect messaging channels

Telegram

Step 1: Configure the Telegram bot

  1. Create a bot with BotFather

    Send the /newbot command to BotFather. Follow the prompts to enter a bot name and username (the username must end with bot). Copy and save the returned Bot Token (format: 123456789:ABCdefGHIjklMNOpqrsTUVwxyz).


  2. In an OpenClaw conversation, enter the following content, replacing xxxx with your actual Bot Token. OpenClaw automatically completes configuration.

    Help me configure Telegram with the following settings. My Bot Token is xxxx.
    {
      "channels": {
        "telegram": {
          "enabled": true,
          "botToken": "xxxx",
          "dmPolicy": "pairing"  
        }
      }
    }
  3. After configuration completes, restart the gateway.

    openclaw gateway restart
  4. Send a message to the bot in Telegram. The reply to your first message contains a pairing code.


  5. Run the following command in your terminal, replacing xxx with your actual pairing code.

    openclaw pairing approve telegram xxx

Step 2: Test

  1. Run the following command in your terminal to restart the gateway.

    openclaw gateway restart
  2. Run the following command to check the Telegram channel status.

    openclaw status

    In the Channels section, Telegram should show as ON with status OK.

  3. Send a test message in Telegram.


Learn more

Skill

Skills are extensible capability modules. Agents automatically match and load the appropriate Skill based on requests. OpenClaw supports viewing and enabling built-in Skills, installing community Skills from ClawHub, or creating custom Skills.

View existing Skills

  1. Run the following command to view installed Skills and their status.

    # List installed Skills
    openclaw skills list
    
    # Check Skill status (enabled, disabled, missing dependencies, etc.)
    openclaw skills check
    
    # View details for a specific Skill
    openclaw skills info <skill-name>
  2. Built-in Skills are disabled by default. Enable them in ~/.openclaw/openclaw.json using the skills.allowBundled whitelist; only Skills listed there are loaded.

    {
      "skills": {
        "allowBundled": [
          "github",
          "weather",
          "summarize",
          "coding-agent",
          "clawhub",
          "nano-pdf",
          "google-web-search",
          "image-lab"
        ]
      }
    }

    Some built-in Skills require third-party API Keys. Configure them in the skills.entries section of ~/.openclaw/openclaw.json. See the Skills configuration documentation for details.

Find more Skills

Find and install more Skills using either of these methods.

  1. Search and install via ClawHub

    ClawHub offers 3,000+ community Skills. Browse the website or search via command line.

    # Search by keyword
    npx clawhub search [keyword]
    
    # Browse recently updated Skills
    npx clawhub explore

    After finding a suitable Skill, run the following command to install it. Restart the gateway after installation to use it.

    npx clawhub install <skill-name>
  2. Ask directly in OpenClaw

    Describe your needs directly in a conversation, such as Help me find a Skill to check the weather. OpenClaw automatically searches and installs it.

Create custom Skill

  1. Create a Skill directory.

    mkdir -p ~/.openclaw/workspace/skills/my-custom-skill
  2. Create a SKILL.md file in this directory. The file consists of YAML front matter and Markdown instructions. The name and description fields are required; the agent uses description to decide whether to load the Skill, so make sure it is accurate.

    ---
    name: my-custom-skill
    description: Brief description
    ---
    
    # My Custom Skill
    
    When the user requests XXX, perform these actions:
    
    1. Use the bash tool to run xxx command
    2. Parse the output
    3. Return results to the user in table format
  3. Restart the gateway to activate the Skill.

    # Restart gateway
    openclaw gateway restart
    
    # Check if Skill is active
    openclaw skills list
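Steps 1 and 2 above can be combined into one script that scaffolds the Skill. A sketch; the skill name and SKILL.md body are the placeholders from the example above:

```shell
# Scaffold a custom Skill: create the directory and a minimal SKILL.md.
SKILL_DIR="$HOME/.openclaw/workspace/skills/my-custom-skill"
mkdir -p "$SKILL_DIR"
cat > "$SKILL_DIR/SKILL.md" <<'EOF'
---
name: my-custom-skill
description: Brief description
---

# My Custom Skill

When the user requests XXX, perform these actions:

1. Use the bash tool to run the xxx command
2. Parse the output
3. Return results to the user in table format
EOF
echo "created $SKILL_DIR/SKILL.md"
```

Run `openclaw gateway restart` afterwards, as in step 3, so the gateway picks up the new Skill.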

See the OpenClaw official documentation for more Skill configuration details.

FAQ

How do I view models configured for Coding Plan?

In your terminal, run openclaw tui to enter the OpenClaw terminal command line. Then run /model to view the model list. Press Enter to select a model, or press Esc to close the list.


What do I do if OpenClaw shows “API rate limit reached”?

Troubleshoot in this order:

  1. OpenClaw configuration error

    If Base URL or model provider configuration is incorrect, requests route to the general API instead of the Coding Plan dedicated channel, triggering rate limits.

    • If using a Coding Plan package, verify the OpenClaw configuration file models, agents, and gateway (including nested fields) match the documentation. For example, the model provider structure should be { "models": { "providers": { "bailian": {...} } } }.

    • If not currently using a Coding Plan package, switch to Coding Plan for dedicated quotas.

  2. Exceeded package quota: Check usage on the Coding Plan page.

    • If quota is exhausted, check the next reset time on that page.

    • If frequently hitting limits, upgrade to the Pro package for more calls.

  3. Try resetting your API Key: If the issue persists after troubleshooting, reset your API Key on the Coding Plan page.

Why do I get "HTTP 401: Incorrect API key provided." or "No API key found for provider xxx"?

Possible causes:

  1. API Key is invalid, expired, empty, malformed, or mismatched with the endpoint environment. Verify the API Key is a Coding Plan package-specific key, fully copied without spaces, and subscription status is active.

  2. A stale OpenClaw configuration cache is causing the error. Delete the providers section in ~/.openclaw/agents/main/agent/models.json and restart OpenClaw.
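To test the key outside OpenClaw, you can call the endpoint directly. This is a sketch that assumes the Coding Plan base URL used elsewhere in this document and an OpenAI-compatible chat completions route; replace YOUR_API_KEY with your actual key. An HTTP 401 response here indicates the key itself is the problem rather than the OpenClaw configuration.

```shell
# Minimal request against the Coding Plan endpoint (model name taken from this document).
curl -sS https://coding-intl.dashscope.aliyuncs.com/v1/chat/completions \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model": "qwen3-coder-plus", "messages": [{"role": "user", "content": "ping"}]}'
```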

I already configured DingTalk and other channels. How do I safely add Coding Plan models without losing existing configuration?

  • Do not replace the entire file. Full replacement overwrites your custom configuration. Perform partial modifications instead.

  • Choose one of these methods:

    • If OpenClaw can converse normally: Enter the following instruction directly in OpenClaw to merge configurations.

    • If OpenClaw lacks model configuration or cannot converse: See Qwen Code guided configuration.

    Instruction content (replace YOUR_API_KEY with your actual API Key):

    Help me connect OpenClaw to Coding Plan with these steps:
    1. Open configuration file: ~/.openclaw/openclaw.json
    2. Locate or create the following fields and merge configuration (keep existing settings unchanged; add new fields if missing):
    {
      "models": {
        "mode": "merge",
        "providers": {
          "bailian": {
            "baseUrl": "https://coding-intl.dashscope.aliyuncs.com/v1",
            "apiKey": "YOUR_API_KEY",
            "api": "openai-completions",
            "models": [
              {
                "id": "qwen3.5-plus",
                "name": "qwen3.5-plus",
                "reasoning": false,
                "input": ["text", "image"],
                "cost": { "input": 0, "output": 0, "cacheRead": 0, "cacheWrite": 0 },
                "contextWindow": 1000000,
                "maxTokens": 65536,
                "compat": {
                  "thinkingFormat": "qwen"
                }
              },
              {
                "id": "qwen3-max-2026-01-23",
                "name": "qwen3-max-2026-01-23",
                "reasoning": false,
                "input": ["text"],
                "cost": { "input": 0, "output": 0, "cacheRead": 0, "cacheWrite": 0 },
                "contextWindow": 262144,
                "maxTokens": 65536,
                "compat": {
                  "thinkingFormat": "qwen"
                }
              },
              {
                "id": "qwen3-coder-next",
                "name": "qwen3-coder-next",
                "reasoning": false,
                "input": ["text"],
                "cost": { "input": 0, "output": 0, "cacheRead": 0, "cacheWrite": 0 },
                "contextWindow": 262144,
                "maxTokens": 65536
              },
              {
                "id": "qwen3-coder-plus",
                "name": "qwen3-coder-plus",
                "reasoning": false,
                "input": ["text"],
                "cost": { "input": 0, "output": 0, "cacheRead": 0, "cacheWrite": 0 },
                "contextWindow": 1000000,
                "maxTokens": 65536
              },
              {
                "id": "MiniMax-M2.5",
                "name": "MiniMax-M2.5",
                "reasoning": false,
                "input": ["text"],
                "cost": { "input": 0, "output": 0, "cacheRead": 0, "cacheWrite": 0 },
                "contextWindow": 196608,
                "maxTokens": 32768
              },
              {
                "id": "glm-5",
                "name": "glm-5",
                "reasoning": false,
                "input": ["text"],
                "cost": { "input": 0, "output": 0, "cacheRead": 0, "cacheWrite": 0 },
                "contextWindow": 202752,
                "maxTokens": 16384,
                "compat": {
                  "thinkingFormat": "qwen"
                }
              },
              {
                "id": "glm-4.7",
                "name": "glm-4.7",
                "reasoning": false,
                "input": ["text"],
                "cost": { "input": 0, "output": 0, "cacheRead": 0, "cacheWrite": 0 },
                "contextWindow": 202752,
                "maxTokens": 16384,
                "compat": {
                  "thinkingFormat": "qwen"
                }
              },
              {
                "id": "kimi-k2.5",
                "name": "kimi-k2.5",
                "reasoning": false,
                "input": ["text", "image"],
                "cost": { "input": 0, "output": 0, "cacheRead": 0, "cacheWrite": 0 },
                "contextWindow": 262144,
                "maxTokens": 32768,
                "compat": {
                  "thinkingFormat": "qwen"
                }
              }
            ]
          }
        }
      },
      "agents": {
        "defaults": {
          "model": {
            "primary": "bailian/qwen3.5-plus"
          },
          "models": {
            "bailian/qwen3.5-plus": {},
            "bailian/qwen3-max-2026-01-23": {},
            "bailian/qwen3-coder-next": {},
            "bailian/qwen3-coder-plus": {},
            "bailian/MiniMax-M2.5": {},
            "bailian/glm-5": {},
            "bailian/glm-4.7": {},
            "bailian/kimi-k2.5": {}
          }
        }
      },
      "gateway": {
        "mode": "local"
      }
    } 
    3. Save the configuration file
    4. Run openclaw gateway restart to restart OpenClaw's gateway and apply the configuration.
    After configuration, start a new OpenClaw or Qwen Code session and run openclaw models status to verify it works.

    After restarting the gateway, existing sessions may not work properly. Restart your session.
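If you prefer to apply the merge yourself rather than delegating it to OpenClaw, the "keep existing settings unchanged; add new fields if missing" rule from step 2 can be sketched in a few lines. The function name merge_missing is illustrative, and the fragment below is abbreviated to one provider field.

```python
def merge_missing(base: dict, patch: dict) -> dict:
    """Recursively add keys from patch that are absent in base.

    Existing values in base are never overwritten, matching the
    "keep existing settings unchanged; add new fields if missing" rule.
    """
    for key, value in patch.items():
        if key not in base:
            base[key] = value
        elif isinstance(base[key], dict) and isinstance(value, dict):
            merge_missing(base[key], value)
        # Otherwise the key already exists with a non-dict value: keep it as-is.
    return base

# Example: merge the bailian provider into a config that already has
# another provider, without touching the existing entry.
existing = {"models": {"providers": {"other": {"apiKey": "keep-me"}}}}
fragment = {
    "models": {"providers": {"bailian": {"baseUrl": "https://coding-intl.dashscope.aliyuncs.com/v1"}}},
    "gateway": {"mode": "local"},
}
merged = merge_missing(existing, fragment)
```

Loading ~/.openclaw/openclaw.json with json.load, applying merge_missing with the full fragment from the instruction above, and writing the result back reproduces steps 1 through 3; restart the gateway afterwards as described.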

Note

If you installed OpenClaw via Simple Application Server, you can directly use the graphical interface to add Coding Plan models. See Simple Application Server addition method for details.

See FAQ for more questions.