This topic describes how to use AI Gateway to migrate Claude Code from the official Claude model to another model service.
Use case
If the official Claude Code model is unavailable, you can use AI Gateway to migrate it to a different model service.
AI Gateway offers the following benefits:
Unified access: A single entry point masks the protocol differences between various model providers.
Vendor lock-in avoidance: Because clients access models through the gateway, you can switch providers without being tied to any one of them.
High availability: Supports failover from Provider A to Provider B if Provider A fails.
On-demand routing: Intelligently routes requests to different providers based on the model name.
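The failover behavior described above can be sketched in a few lines of Python. This is an illustrative sketch only, not gateway code; the provider functions are hypothetical stand-ins for calls to two model services:

```python
# Illustrative failover sketch; provider_a and provider_b are hypothetical
# stand-ins for two upstream model services behind the gateway.
def provider_a(prompt: str) -> str:
    raise ConnectionError("provider A unavailable")  # simulate an outage

def provider_b(prompt: str) -> str:
    return "response from provider B"

def call_with_failover(prompt: str, providers) -> str:
    last_error = None
    for provider in providers:  # try providers in order until one succeeds
        try:
            return provider(prompt)
        except ConnectionError as err:
            last_error = err  # this provider failed; fall back to the next
    raise last_error

print(call_with_failover("hello", [provider_a, provider_b]))
# prints "response from provider B": provider A fails, so the request
# falls back to provider B
```

The gateway applies the same ordered-fallback idea, but at the routing layer, so clients such as Claude Code do not need any retry logic of their own.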
Prerequisites
A virtual private cloud (VPC) is created and attached to an Internet NAT gateway. For more information, see VPCs and vSwitches and Public network access.
An AI Gateway instance is required. If you have not created one, see Create a gateway instance.
The Claude Code client is installed.
Obtain the API key from your target large model service provider.
Migration procedure
Step 1. Create an AI service
Log on to the AI Gateway console.
In the navigation pane on the left, choose Instance. In the top menu bar, select a region.
On the Instance page, click the target instance ID.
In the navigation pane on the left, choose Service. Click Create Service. Set Service Source to AI Service.
Under Large Model Supplier, select a model provider. In the Large Model Configuration section, enter the API key and click OK. If needed, you can create services for multiple providers to enable multi-model support and ensure high availability.
Important: To prevent your API key from being exposed, store the key in KMS by referencing a credential. For more information, see Reference a key.
Step 2. Create and test a Model API
In the navigation pane on the left, choose Model API. Click Create Model API. For this example, click Create in the Text Generation card.
On the Create Model API panel, configure the basic information. In the Large Model Service section, set the Service Model to Single Service, select the AI service that you created for Service Name, and click OK.
After the API is created, click Test in the upper-right corner. On the Routing Test page, enter a prompt and click Send Request.
A correct response from the model indicates that the Model API correctly routes requests to the specified large model service.
Note: If you created multiple large model services, you can select Multi-model Service and configure multiple AI services when you create the Model API.
Routing rules support wildcard characters. For example, qwen* matches all model names that start with "qwen".
If you enable Fallback, the system automatically switches to a standby service when the primary service encounters an error or is throttled. You can configure multiple Fallback services for sequential failover.
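The wildcard matching described in the note can be illustrated with Python's fnmatch module, which implements the same shell-style globbing. The model names below are examples; the gateway's exact wildcard semantics are assumed to follow standard glob rules:

```python
from fnmatch import fnmatch

# A routing rule such as "qwen*" matches any model name beginning with "qwen".
rule = "qwen*"
for model in ["qwen3-coder-plus", "qwen-max", "deepseek-v3"]:
    print(model, fnmatch(model, rule))
# qwen3-coder-plus True
# qwen-max True
# deepseek-v3 False
```

In practice this means a single rule can route an entire model family (for example, every qwen variant) to one AI service, while other names fall through to other rules.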
After the test, return to the Model API list. In the row for the Model API that you created, click the copy icon in the Instance Access Endpoint column. Use the copied endpoint to set the Claude Code environment variable.
Step 3. Use the specified model in Claude Code
Set environment variables
ANTHROPIC_BASE_URL: Set this to the Model API instance access endpoint that you copied in Step 2.
ANTHROPIC_AUTH_TOKEN: If authentication is enabled for the Model API and a separate API key is configured, enter that API key. If authentication is not enabled, enter any value. This field cannot be empty.
export ANTHROPIC_BASE_URL=http://env-xxxxxx-cn-hangzhou.alicloudapi.com
export ANTHROPIC_AUTH_TOKEN=xxxx
Start with the specified model
This example uses the qwen3 model.
# Use the Alibaba Cloud qwen3 model
claude --model qwen3-coder-plus
Step 4. Verify the migration
After you start Claude Code, perform the following tests to verify the migration:
Basic conversation test: Enter a simple question to verify that the model responds correctly.
Code generation test: Request code for a specific function to test the model's programming capabilities.
Tool calling test: If you use the MCP tool, verify that the tool calling feature works as expected.
Multi-model switching test: Use the --model parameter to switch between different models and verify the routing feature.