Configure base URL and API key in Cursor to use Alibaba Cloud Model Studio models for code writing and chats.
Model configuration
Supported models
Category | Models
Text generation - Qwen | Qwen-Max, Qwen-Plus, Qwen-Flash, Qwen-Turbo, Qwen-Coder, QwQ
Text generation - Qwen - Open source | Qwen3.5, Qwen3, QwQ - Open source, QwQ - Preview, Qwen2.5, Qwen-Coder
Recommendations
Deep development and architecture design: Use qwen3.5-plus, qwen3-max, qwen3-coder-next (combines coding capabilities with response speed), and qwen3-coder-plus. These models are suitable for complex algorithm implementation, system architecture design, and core logic inference.
Assisted coding and lightweight tasks: Use qwen3-coder-next and qwen3.5-flash. These models are suitable for code completion, explanation, and daily script writing.
Steps
1. Install Cursor
Download and install Cursor from the Cursor official website.
Due to Cursor's limitations, you must subscribe to Cursor Pro or higher to use custom models. Otherwise, an error will occur.
2. Activate Model Studio
Create an account: If you do not have an Alibaba Cloud account, create one.
If you encounter problems, see Create an Alibaba Cloud account.
Activate Model Studio: Use your Alibaba Cloud account to go to Model Studio. After you read and agree to the Terms of Service, Model Studio is activated automatically. If the Terms of Service do not appear, the service has already been activated.
If a message indicates that you have not completed identity verification, complete identity verification first.
After you activate Model Studio for the first time, you receive a free quota for model inference valid for 90 days. For more information, see the Free quota for new users.
Charges apply if the quota is used up or expires. To avoid these charges, use the Free quota only feature. Actual fees depend on the prices shown in the console and your final bill.
3. Configure models
This document applies only to pay-as-you-go mode. Coding Plan users must use their exclusive base URL and API key instead. For details, see Coding Plan for Cursor.
In Cursor, click the settings icon and open Cursor Settings, then select the Models page.
Enable OpenAI API Key and enter your Model Studio API key.
Enable Override OpenAI Base URL. Enter the endpoint based on your region:
Singapore: https://dashscope-intl.aliyuncs.com/compatible-mode/v1
US (Virginia): https://dashscope-us.aliyuncs.com/compatible-mode/v1
China (Beijing): https://dashscope.aliyuncs.com/compatible-mode/v1
In Add or search model, enter the model name you want to use. Click Add Custom Model. We recommend choosing models with strong coding capabilities. For supported models in each mode, see Model list.
After configuration, select the configured model in the chat panel to start using it.
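Before relying on the Cursor UI, the base URL and API key can be sanity-checked directly against the OpenAI-compatible /chat/completions route. The sketch below uses only the Python standard library; the DASHSCOPE_API_KEY environment variable name and the helper function are illustrative assumptions, not part of the official setup.

```python
import os
import json
import urllib.request

# The documented OpenAI-compatible base URLs for each Model Studio region.
BASE_URLS = {
    "singapore": "https://dashscope-intl.aliyuncs.com/compatible-mode/v1",
    "us-virginia": "https://dashscope-us.aliyuncs.com/compatible-mode/v1",
    "china-beijing": "https://dashscope.aliyuncs.com/compatible-mode/v1",
}

def chat_request(region: str, model: str, prompt: str) -> urllib.request.Request:
    """Build (but do not send) an OpenAI-compatible chat completion request."""
    url = BASE_URLS[region] + "/chat/completions"
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    headers = {
        # Assumes the key is exported as DASHSCOPE_API_KEY (illustrative).
        "Authorization": "Bearer " + os.environ.get("DASHSCOPE_API_KEY", ""),
        "Content-Type": "application/json",
    }
    return urllib.request.Request(url, data=body, headers=headers, method="POST")

req = chat_request("singapore", "qwen3-coder-plus", "hello")
print(req.full_url)
```

Sending the request with urllib.request.urlopen(req) and a valid key should return a JSON chat completion. A 401 at this stage points at the key or base URL rather than at Cursor.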

FAQ
Q: Why can't I use the added models in Cursor?
A: If you see one of the following error messages:
The model xxx does not work with your current plan or api key.
Named models unavailable Free plans can only use Auto. Switch to Auto or upgrade plans to continue.
Due to Cursor's limitations, the free plan supports only Auto mode and does not allow custom models. To use models from Model Studio, upgrade to Cursor Pro or higher.
Q: What if I cannot find the added model after configuration?
A: In the chat panel, click the model selector and disable Auto mode. Then, select the desired model from the model drop-down list.
Q: Why do I receive the error "We're having trouble connecting to the model provider." or "Unauthorized User API key"?
A: Possible reasons:
The model you invoked does not exist. For supported models in each deployment mode, see Model list.
After you configure the Alibaba Cloud Model Studio base URL and API key, calls to models from other providers will fail. If you want to use models from other providers, reconfigure or disable the OpenAI API Key and Override OpenAI Base URL as needed.
The error might also stem from Cursor's incompatibility with certain models, rather than from a configuration or model provider issue.
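When triaging these errors, it can help to call the compatible-mode endpoint directly (for example with curl or a short script) and map the HTTP status to a likely cause, which isolates credential and model-name problems from Cursor-side issues. The mapping below is an illustrative sketch, not an exhaustive list of Model Studio error codes.

```python
def diagnose(status_code: int) -> str:
    """Map common HTTP statuses from a direct endpoint call to likely causes.
    Illustrative mapping only; always read the error body for specifics."""
    causes = {
        401: "invalid or missing API key (check the OpenAI API Key field)",
        404: "model name not found (check it against the Model list)",
        429: "rate limit hit or quota exhausted",
    }
    return causes.get(
        status_code,
        "possibly a Cursor-side incompatibility; retry the call outside Cursor",
    )

print(diagnose(401))
```

If a direct call succeeds but Cursor still fails, the configuration in Cursor (or Cursor itself) is the more likely culprit.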
Q: What if the model response is slow?
A: Response speed is affected by multiple factors. Consider the following:
Choose appropriate models: For simple jobs, use qwen3-coder-next or qwen-flash for faster responses.
Check network connectivity: Ensure stable network connectivity. Switch network environments as needed.
Reduce context length: An overly long conversation history increases processing time. To improve response efficiency, start a new conversation.
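The last point can also be applied programmatically when calling the API outside Cursor: keep the system prompt and only the most recent turns. The helper name and turn limit below are illustrative assumptions, not a Model Studio feature.

```python
def trim_history(messages: list[dict], max_turns: int = 6) -> list[dict]:
    """Keep any system messages plus only the last `max_turns` other messages.
    Shorter context generally means faster responses (illustrative sketch)."""
    system = [m for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]
    return system + rest[-max_turns:]

history = [{"role": "system", "content": "You are a coding assistant."}]
history += [{"role": "user", "content": f"question {i}"} for i in range(10)]
print(len(trim_history(history, max_turns=4)))
```

The trimmed list can be passed as the messages array in subsequent chat completion requests, which bounds processing time as a conversation grows.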