
How to Configure Model Studio API on OpenClaw (Moltbot/Clawdbot)

Suitable for users who want to configure the Model Studio API for Moltbot/Clawdbot

By Zhihe from Model Studio

Model Studio AI Coding Plan is now available for OpenClaw (Moltbot/Clawdbot) and can be used to offset usage of Qwen3-Max-Thinking, the trillion-parameter reasoning model (model name: qwen3-max-2026-01-23).
Start for as low as $5. šŸ‘‰šŸ» Subscribe Now

🦞 About OpenClaw

OpenClaw (Moltbot/Clawdbot) is a personal AI assistant you run on your own devices.

● It answers you on the channels you already use (WhatsApp, Telegram, Slack, Discord, Google Chat, Signal, iMessage, Microsoft Teams, WebChat), plus extension channels like BlueBubbles, Matrix, Zalo, and Zalo Personal.

● It can speak and listen on macOS/iOS/Android, and can render a live Canvas you control.

The Gateway is just the control plane — the product is the assistant. If you want a personal, single-user assistant that feels local, fast, and always-on, this is it.

šŸ’» Install Moltbot and Configure Model Studio API

Clawdbot was officially renamed to Moltbot on January 27, 2026.

Moltbot was officially renamed to OpenClaw on January 30, 2026.

After these renames, users who install via https://molt.bot/install.sh and encounter errors like "zsh: command not found" may need to replace the "clawdbot" commands in this tutorial with "moltbot".

1) Install and Configure Moltbot

Prerequisites

Verify your Node.js version. Moltbot requires Node >=22. If you're not running version 22 or higher, upgrade Node first (you can use nvm, fnm, or brew).

node -v
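If the reported version is below 22, one way to upgrade is via nvm. This is a minimal sketch assuming nvm is already installed; fnm or brew work similarly:

# Install and switch to Node 22 with nvm (assumes nvm is already installed)
nvm install 22
nvm use 22
# Confirm the active version
node -v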

One-line Installation of Moltbot

Moltbot's installation guide offers several installation methods; pick the one that suits your platform.

For macOS/Linux users, we recommend the official one-line CLI installation:

curl -fsSL https://molt.bot/install.sh | bash

Windows (PowerShell):

iwr -useb https://molt.bot/install.ps1 | iex

Alternatively, you can install globally via npm or pnpm:

npm install -g moltbot@latest
pnpm add -g moltbot@latest
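Whichever method you use, a quick way to confirm the CLI landed on your PATH is to look it up with the shell itself (no Moltbot-specific flags involved):

# Check that the binary is reachable; one of these should print a path
command -v moltbot
command -v clawdbot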

After the script completes, you'll see the clawdbot onboarding instructions, indicating a successful installation šŸ‘‡šŸ»


Complete Moltbot (Clawdbot) Configuration

Reference Configuration → Your Selection

● "I understand this is powerful and inherently risky. Continue?" → Select "Yes"
● Onboarding mode → Select "QuickStart"
● Model/auth provider → Select "Skip for now" (can be configured later)
● Filter models by provider → Select "All providers"
● Default model → Use the default configuration
● Select channel (QuickStart) → Select "Skip for now" (can be configured later)
● Configure skills now? (recommended) → Select "No" (can be configured later)

Note: The model provider "Qwen (OAuth)" in the installation wizard refers to Qwen Chat login. After registration and login, you can access the OAuth flow for the Qwen Coder and Qwen Vision models at the free tier (2,000 requests per day, subject to Qwen rate limits).

2) Obtain Model Studio API Key

Moltbot supports using models.providers (or models.json) to add custom model providers or OpenAI/Anthropic-compatible proxy services.

Model Studio's model API supports OpenAI-compatible interfaces. If you are using the AI Coding Plan, your plan quota can offset usage of the models it covers, such as qwen3-max-2026-01-23.

Otherwise, you can choose other models; just log in to the Model Studio Console and prepare an API key:

How to Obtain an API Key?

Log in to the Model Studio Console, open the settings page in the top-right corner, go to API Key, click Create API Key, and copy the key.

When you use the model service in Alibaba Cloud Model Studio, you must select a region and a deployment mode. These choices affect the service's response speed, cost, available models, and default rate limits.

Region Selection

● Region: Determines the endpoint (base URL) of your model service and where static data, such as prompts and model outputs, is stored.

● Deployment mode: Determines the region where model inference is performed.


3) Configure API Key as Environment Variable

We recommend configuring the API key as an environment variable to avoid explicitly exposing it in code and reduce the risk of leakage.

a. Execute the following command in your terminal to check your default shell type:

echo $SHELL
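For reference, the output is the path of your login shell, for example:

# Typical outputs of echo $SHELL
# /bin/zsh   -> follow the zsh steps below
# /bin/bash  -> follow the bash steps below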

b. Based on your default shell type, choose either zsh or bash:

If your shell is zsh:

1. Execute the following command to append the environment variable to your ~/.zshrc file:

# Replace YOUR_DASHSCOPE_API_KEY with your actual Alibaba Cloud Model Studio API Key
echo "export DASHSCOPE_API_KEY='YOUR_DASHSCOPE_API_KEY'" >> ~/.zshrc

2. Execute the following command to apply the changes:

source ~/.zshrc

3. Open a new terminal window and run the following command to verify the environment variable:

echo $DASHSCOPE_API_KEY

If your shell is bash:

1. Execute the following command to append the environment variable to your ~/.bash_profile file:

# Replace YOUR_DASHSCOPE_API_KEY with your actual Alibaba Cloud Model Studio API Key
echo "export DASHSCOPE_API_KEY='YOUR_DASHSCOPE_API_KEY'" >> ~/.bash_profile

2. Execute the following command to apply the changes:

source ~/.bash_profile

3. Open a new terminal window and run the following command to verify the environment variable:

echo $DASHSCOPE_API_KEY
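Once the variable is set, you can optionally send a minimal request to the OpenAI-compatible endpoint to confirm the key works before touching any Moltbot configuration. This sketch uses the international (Singapore) base URL from this tutorial and the qwen-plus model; it sends one real request and may incur a small cost:

# Minimal smoke test against the OpenAI-compatible endpoint (may incur a small cost)
curl -s https://dashscope-intl.aliyuncs.com/compatible-mode/v1/chat/completions \
  -H "Authorization: Bearer $DASHSCOPE_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
        "model": "qwen-plus",
        "messages": [{"role": "user", "content": "Say hello"}]
      }'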

4) Modify the Moltbot Configuration File

Note: Moltbot configuration is strictly validated. Incorrect or extra fields may cause the Gateway to fail to start. If you encounter errors, run moltbot doctor (or clawdbot doctor) to check the error messages.
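For example, after editing the configuration you can run:

# Validate the configuration (older installations: clawdbot doctor)
moltbot doctor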

Moltbot uses the provider/model format for model references. We need to add model information to the configuration in a way that Moltbot can parse. You can choose either the Web UI method or manually edit the configuration file.

Web UI

# For installations before January 27, 2026 (clawdbot)
clawdbot dashboard

# For installations after January 27, 2026 (moltbot)
moltbot dashboard


Manual Editing

Alternatively, manually edit the configuration file at ~/.moltbot/moltbot.json (or ~/.clawdbot/clawdbot.json for older installations). Here's an example configuration for Model Studio's newly released qwen3-max-2026-01-23 model (released on January 26, 2026):

{
  agents: {
    defaults: {
      model: { primary: "modelstudio/qwen3-max-2026-01-23" },
      models: {
        "modelstudio/qwen3-max-2026-01-23": { alias: "Qwen Max Thinking" }
      }
    }
  },
  models: {
    mode: "merge",
    providers: {
      modelstudio: {
        baseUrl: "https://dashscope-intl.aliyuncs.com/compatible-mode/v1",
        apiKey: "${DASHSCOPE_API_KEY}",
        api: "openai-completions",
        models: [
          {
            id: "qwen3-max-2026-01-23",
            name: "Qwen3 Max Thinking",
            reasoning: false,
            input: ["text"],
            cost: { input: 0.0012, output: 0.006, cacheRead: 0, cacheWrite: 0 },
            contextWindow: 262144,
            maxTokens: 32768
          }
        ]
      }
    }
  }
}

If you're using nano to edit, execute the following command to open the configuration file and paste the content:

# For installations before January 27, 2026 (clawdbot)
nano ~/.clawdbot/clawdbot.json

# For installations after January 27, 2026 (moltbot)
nano ~/.moltbot/moltbot.json

After pasting the configuration, in the nano editor, press Ctrl + X, then Y, and finally Enter to save and close the file.
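Optionally, before making further edits, you can keep a copy of the last known-good file so it is easy to roll back; this is just a plain file copy (adjust the path to ~/.clawdbot/clawdbot.json on older installations):

# Keep a backup of the working configuration for easy rollback
cp ~/.moltbot/moltbot.json ~/.moltbot/moltbot.json.bak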


Of course, if your use case is simple and doesn't involve complex agent tool calls, you can also use the qwen-plus model:

{
  agents: {
    defaults: {
      model: { primary: "modelstudio/qwen-plus" },
      models: {
        "modelstudio/qwen-plus": { alias: "Qwen Plus" }
      }
    }
  },
  models: {
    mode: "merge",
    providers: {
      modelstudio: {
        baseUrl: "https://dashscope-intl.aliyuncs.com/compatible-mode/v1",
        apiKey: "${DASHSCOPE_API_KEY}",
        api: "openai-completions",
        models: [
          {
            id: "qwen-plus",
            name: "Qwen Plus",
            reasoning: false,
            input: ["text"],
            cost: { input: 0.0004, output: 0.0012, cacheRead: 0, cacheWrite: 0 },
            contextWindow: 262144,
            maxTokens: 32000
          }
        ]
      }
    }
  }
}

5) Apply Configuration and Verify Model Status

Run the following commands in your terminal to ensure the configuration takes effect:

# Method 1: Stop and start the service
clawdbot gateway stop
# Wait 2-3 seconds, then start the service
clawdbot gateway start

# Method 2: Use the restart command directly
clawdbot gateway restart

You can also use the following command to check if the model you just configured is recognized by Clawdbot:

clawdbot models list

You can also perform a real connectivity test (this will send actual requests and may incur costs):

clawdbot models status --probe
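If your installation was renamed to moltbot (see the note at the top of this tutorial), the same commands apply under the new name, for example:

# Equivalent commands for installations after January 27, 2026
moltbot gateway restart
moltbot models list
moltbot models status --probe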


šŸ’¬ Start Chatting

If you want to verify the model's response before connecting chat applications like Discord or Telegram, you can launch the Web UI directly and start chatting šŸ‘‡šŸ»

# For installations before January 27, 2026 (clawdbot)
clawdbot dashboard

# For installations after January 27, 2026 (moltbot)
moltbot dashboard

Or run an agent conversation from the CLI:

clawdbot agent --agent main --message "Introduce Qwen3 Max capabilities"

Or you can try an open-source chat UI, such as:
https://github.com/agentscope-ai/agentscope-spark-design/tree/main/packages/clawd-chat-ui

🌱 Model Selection

Alibaba Cloud Model Studio provides model services in the Singapore, Virginia, and Beijing regions. Each region has a different API key. Accessing the service from a nearby region reduces network latency. For more information, see Select a deployment mode.

Supported regions

Region Name        Region ID         Static data storage location
Singapore          ap-southeast-1    Singapore
US (Virginia)      us-east-1         Virginia
China (Beijing)    cn-beijing        Beijing

When you call a model using an API or SDK, use the model service endpoint that corresponds to the region; see the Qwen API Reference.
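For reference, here are the OpenAI-compatible base URLs for two of these regions. The Singapore (international) endpoint is the one used in this tutorial's configuration; the Beijing endpoint is listed on the assumption that it follows the standard DashScope domain, and you should check the console for the endpoint of your own region (for example, Virginia):

# Singapore (ap-southeast-1), international endpoint used earlier in this tutorial
https://dashscope-intl.aliyuncs.com/compatible-mode/v1
# China (Beijing, cn-beijing) endpoint
https://dashscope.aliyuncs.com/compatible-mode/v1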

Select a suitable model for your task

● qwen3-max-2026-01-23: The best-performing model in the Qwen series, suitable for handling complex, multi-step tasks. It supports calling built-in tools to achieve higher accuracy when solving complex problems.
● qwen-plus: Balances performance, speed, and cost, making it the recommended choice for most scenarios.

For current pricing, refer to Model Studio.