This topic describes the error codes and their corresponding messages that you may encounter when using Alibaba Cloud Model Studio.
Error messages
HTTP status code | Code | Message | Description |
- | - | error.AuthenticationError: No api key provided. You can set by dashscope.api_key = your_api_key in code, or you can set it via environment variable DASHSCOPE_API_KEY= your_api_key. | The API key is not provided when using the DashScope SDK. |
- | - | openai.OpenAIError: The api_key client option must be set either by passing api_key to the client or by setting the OPENAI_API_KEY environment variable | The API key is not provided. You can set the API key as an environment variable, or write the API key in the code (not recommended). |
- | mismatched_model | The model 'xxx' for this request does not match the rest of the batch. Each batch must contain requests for a single model. | In a batch task, all requests must use the same model. Check your input file according to the input format. |
- | duplicate_custom_id | The custom_id 'xxx' for this request is a duplicate of another request. The custom_id parameter must be unique for each request in a batch. | In a batch task, each request ID must be unique. Check your input file according to the input format and ensure all request IDs are not duplicated. |
- | model_not_found | The provided model 'xxx' is not supported by the Batch API. | The current model does not support batch calls, or the model name is incorrect. Check the supported models and the spelling of the model name. |
400 | invalid_parameter_error | current user api does not support http call | The open-source Qwen3 models do not support non-streaming output. Use streaming output instead. |
400 | invalid_parameter_error | This model only support stream mode, please enable the stream parameter to access the model. | The model only supports streaming output. |
400 | InvalidParameter | Required parameter(s) missing or invalid, please check the request parameters. | The request parameters are invalid. |
400 | APIConnectionError | Connection error. | Local network issue. This is usually because of an enabled proxy. Disable or restart the proxy and try again. |
400 | InvalidParameter | Temperature should be in [0.0, 2.0) | The temperature parameter is not within the range of [0.0, 2.0). For the parameter range, see Qwen. |
400 | invalid_request_error | 'temperature' must be Float | The temperature parameter is not within the range of [0.0, 2.0). For the parameter range, see Qwen. |
400 | InvalidParameter | Range of top_p should be (0.0, 1.0] | The top_p parameter is not within the range of (0.0, 1.0]. For the parameter range, see Text generation - Qwen. |
400 | invalid_request_error | 'top_p' must be Float | The top_p parameter is not within the range of (0.0, 1.0]. For the parameter range, see Text generation - Qwen. |
400 | InvalidParameter | Presence_penalty should be in [-2.0, 2.0] | The presence_penalty parameter is not within the range of [-2.0, 2.0]. For the parameter range, see Text generation - Qwen. |
400 | InvalidParameter | Range of max_tokens should be [1, 2000] | The max_tokens parameter is not within the range of [1, Maximum output of the model]. For the output limits of models, see Models. |
400 | InvalidParameter | Range of n should be [1, 4] | The n parameter is not within the range of [1, 4]. For the parameter range, see Text generation - Qwen. |
400 | InvalidParameter | Range of seed should be [0, 9223372036854775807] | When calling LLM API using the DashScope method, the seed parameter is not within the range of [0, 9223372036854775807]. For the parameter range, see Text generation - Qwen. |
400 | invalid_request_error | -1 is lesser than the minimum of 0 - 'seed' or 'seed' must be Integer | When calling LLM API using the OpenAI method, the seed parameter is not within the range of [0, 2^31 - 1]. For the parameter range, see Text generation - Qwen. |
400 | InvalidParameter | The "stop" parameter must be of type "str", "list[str]", "list[int]", or "list[list[int]]", and all elements within the list must be of the same type. | The stop parameter does not conform to the "str", "list[str]", "list[int]", or "list[list[int]]" formats. For parameter descriptions, see Text generation - Qwen. |
400 | InvalidParameter | Parameter top_k be greater than or equal to 0 | The top_k parameter cannot be negative. For the parameter range, see Text generation - Qwen. |
400 | InvalidParameter | Repetition_penalty should be greater than 0.0 | The repetition_penalty parameter cannot be negative. For the parameter range, see Text generation - Qwen. |
400 | InvalidParameter | Value error, batch size is invalid, it should not be larger than xx. | When calling an embedding model, the number of input rows exceeds the upper limit. For row limits, see Embedding. |
400 | InvalidParameter | Range of input length should be [1, xxxx] | The input length exceeds the upper limit. For the limits, see Models. |
400 | InvalidParameter | The provided URL does not appear to be valid. Ensure it is correctly formatted. | When using visual understanding and omni-modal models, the URL of the input data is invalid. The URL must start with http://, https://, data:, or file://. |
400 | InvalidParameter | Input should be a valid dictionary or instance of GPT3Message | The format of the messages field does not meet the requirements, such as mismatched parentheses or missing necessary key-value pairs. |
400 | invalid_request_error | 'audio' output only support with stream=true | When using Qwen-Omni, stream is not enabled. |
400 | InvalidParameter | Either "prompt" or "messages" must exist and cannot both be none | The prompt and messages fields cannot both be empty. The cause might be a format error. For example, when calling a model using the DashScope SDK, messages must be placed in the input object instead of being parallel to the model parameter.
|
400 | InvalidParameter | 'messages' must contain the word 'json' in some form, to use 'response_format' of type 'json_object'. | When enabling structured output, the System Message or User Message does not guide the model to output in JSON format. Example: "Please output in JSON format." |
400 | InvalidParameter | Too many files provided. | The number of provided file IDs exceeds the limit. Provide no more than 100 file IDs. |
400 | InvalidParameter | File [id:file-fe-***********] exceeds size limit. | The size of the provided file exceeds the limit. Make sure that the file is smaller than 150 MB. |
400 | InvalidParameter | File [id:file-fe-***********] exceeds page limits (15000 pages). | The number of pages in the provided file exceeds the limit. Make sure that the number of pages is less than 15,000. |
400 | InvalidParameter | File [id:file-fe-***********] content blank. | The provided file is empty. Make sure that the file content meets the requirements. |
400 | InvalidParameter | Total message token length exceed model limit (10000000 tokens). | The total input exceeds 10,000,000 tokens. Make sure that the message length meets the requirements. |
400 | DataInspectionFailed data_inspection_failed | Input or output data may contain inappropriate content. Input data may contain inappropriate content. Output data may contain inappropriate content. | Data inspection error. The input or output may contain inappropriate content and is blocked during content moderation. |
400 | BadRequest.EmptyInput | Required input parameter missing from request. | The input cannot be empty. |
400 | BadRequest.EmptyParameters | Required parameter "parameters" missing from request. | The parameters parameter cannot be empty. |
400 | BadRequest.EmptyModel | Required parameter "model" missing from request. | The model parameter cannot be empty. |
400 | InvalidURL | Invalid URL provided in your request. | The request URL is incorrect. |
400 | InvalidParameter | The video modality input does not meet the requirements because: the range of sequence images shoule be (4, 80)/(4,512). | When using the Qwen-VL model for video understanding and passing the video in as an image list, the number of images does not meet the requirements. For details, see Visual understanding. |
400 | InvalidParameter | Exceeded limit on max bytes per data-uri item : 10485760'. | When using the OpenAI SDK to pass a local image or video file to Qwen-VL, QVQ, or Qwen-Omni, the image or video size does not meet the requirements. |
400 | InvalidParameter | The image length and width do not meet the model restrictions. [absolute aspect ratio must be smaller than 200, got n / m]. The image length and width do not meet the model restrictions. [height:n or width:m must be larger than 10]. | The length and width of the image passed into Qwen-VL do not meet the requirements: the aspect ratio must be smaller than 200, and both the height and width must be larger than 10 pixels. |
400 | InvalidParameter | The file format is illegal and cannot be opened. | The format of the provided file is invalid and the file cannot be opened. Make sure that the file format meets the requirements. |
400 | InvalidParameter | Failed to download multimodal content. Download the media resource timed out during the data inspection process. | Failed to download the multimodal files, or the download timed out. |
400 | InvalidParameter | Don't have authorization to access the media resource during the data inspection process. | Not authorized to access the media file. Possible cause: When calling the model, you passed a signed file URL from OSS and the URL has expired. Make sure you access the file within the validity period of the URL. |
400 | invalid_value | Invalid value: vide. Supported values are: 'text','image_url','video_url' and 'video'. | When calling the model using the OpenAI SDK, the value of the type attribute of the content parameter is invalid. Valid values: text, image_url, video, or video_url. |
400 | InvalidParameter | Invalid video file. | The video file provided is invalid. |
400 | InvalidParameter | The video modality input does not meet the requirements because: The video file is too long. | The video duration exceeds the limit of Qwen-VL or Qwen-Omni. |
400 | Arrearage | Access denied, please make sure your account is in good standing. | Your account has an overdue payment. Settle the outstanding bill and try again. |
400 | UnsupportedOperation | The operation is unsupported on the referee object. | The referee object does not support this operation. This message may vary based on the actual scenario. |
400 | InvalidSchema | Database schema is invalid for text2sql. | Enter the schema of your database. |
400 | InvalidSchemaFormat | Database schema format is invalid for text2sql. | The schema format of the input data table is invalid. |
400 | FaqRuleBlocked | Input or output data is blocked by faq rule. | The input or output is blocked by FAQ rules. |
400 | CustomRoleBlocked | Input or output data may contain inappropriate content with custom rule. | The input or output is blocked by custom rules. |
400 | InternalError.Algo | Missing Content-Length of multimodal url. | The response header of the multimodal URL request lacks the Content-Length field. |
400 | InvalidParameter | Wrong Content-Type of multimodal url | The Content-Type in the response header of the multimodal URL request is incorrect. |
400 | InvalidParameter | The content field is a required field. | The content parameter is not configured when calling the model using the Java SDK. |
400 | invalid_request_error | Payload Too Large. | When executing a batch task, the size of the uploaded JSONL file exceeds the upper limit. Check that the file size meets the limit, or split the file into multiple smaller files. For the JSONL file format, see Input file format. |
401 | InvalidApiKey invalid_api_key | Invalid API-key provided. Incorrect API key provided. | The API key in your request is invalid. Check that the API key is correct and has not been deleted. |
403 | AccessDenied access_denied | Access denied. | You are not authorized to access this API. For example, the model is in invitational preview and you have not been granted access. In the Model Studio console, find the corresponding model card on the Models page and click Apply Now. |
403 | Workspace.AccessDenied | Workspace access denied. | You are not authorized to access the applications or models in this workspace. |
403 | Model.AccessDenied | Model access denied. | The RAM user is not authorized to access models in this workspace. For more information, see Call a model in a sub-workspace. |
403 | AccessDenied.Unpurchased | Access to model denied. Please make sure you are eligible for using the model. | You may not have activated Model Studio. Register or log on to your Alibaba Cloud account, and then go to Models to activate the service. |
404 | WorkSpaceNotFound | WorkSpace can not be found. | The workspace that you specify does not exist. |
404 | ModelNotFound model_not_found | Model can not be found. The model xx does not exist. | The model that you specify does not exist. |
404 | ModelNotFound model_not_found | The model xx does not exist or you do not have access to it. | You have not activated Model Studio yet. You need to go to Model Market to activate the model service. |
408 | RequestTimeOut | - | The request timed out. Try again later. |
413 | BadRequest.TooLarge | Payload Too Large. | The gateway at the access layer returns an error that the request body is too large. If the error is returned by the Microservice Engine (MSE) gateway, no code is returned and the message cannot be customized. If the error is returned by the RESTful gateway, a code is returned. |
415 | BadRequest.InputDownloadFailed | Failed to download the input file: xxx. | Failed to download the input file, which may be due to download timeout, download failure, or file size exceeding the limit. The message may include additional details. |
415 | BadRequest.UnsupportedFileFormat | The format of the input file is not supported. | The format of the input file is not supported. |
429 | Throttling | Requests throttling triggered. | The API request triggers throttling. |
429 | Throttling.RateQuota | Requests rate limit exceeded, please try again later. | The call frequency exceeds the throttling threshold, such as the number of requests per second. Wait for a period of time before trying again. |
429 | Throttling.AllocationQuota insufficient_quota | Allocated quota exceeded, please increase your quota limit. You exceeded your current quota, please check your plan and billing details. | The request volume exceeds an allocated quota, such as the number of tokens generated per minute. |
429 | LimitRequests limit_requests | You exceeded your current requests list | You have exceeded the throttling threshold. You can make requests again after your request rate falls below the threshold. |
429 | Throttling.AllocationQuota | Free allocated quota exceeded. | The free quota has expired or been exhausted, and the model does not support paid access. |
429 | PrepaidBillOverdue | The prepaid bill is overdue. | The subscription for this workspace is overdue. |
429 | PostpaidBillOverdue | The postpaid bill is overdue. | Payment for the model inference service is overdue. |
429 | CommodityNotPurchased | Commodity has not purchased yet. | The service is not activated in this workspace. |
500 | InternalError internal_error | An internal error has occured, please try again later or contact service support. | An internal error occurred. If you are using Qwen-Omni, you must use streaming output. |
500 | InternalError.Algo | An internal error has occured during execution, please try again later or contact service support. | An internal algorithm error occurred. |
500 | InternalError.Algo | Role must be in [user, assistant] | When using the Qwen-MT model, ensure that the messages array contains only one element, which must be a User Message. |
500 | SystemError system_error | An system error has occured, please try again later. | A system error occurred. |
500 | InternalError.Timeout | An internal timeout error has occured during execution, please try again later or contact service support. | After an asynchronous task is submitted from the gateway to the algorithm service layer, it waits up to 3 hours for a result. If no result is returned within that time, the task times out. |
500 | RewriteFailed | Failed to rewrite content for prompt. | Prompt rewriting failed. |
500 | RetrivalFailed | Failed to retrieve data from documents. | Document retrieval failed. |
500 | AppProcessFailed | Failed to proceed application request. | Application flow processing failed. |
500 | ModelServiceFailed | Failed to request model service. | Model call failed. |
500 | InvokePluginFailed | Failed to invoke plugin. | Plug-in call failed. |
503 | ModelUnavailable | Model is unavailable, please try again later. | The model is temporarily unavailable for service. |
503 | ModelServingError | Too many requests. Your requests are being throttled due to system capacity limits. Please try again later. | Our network resources are currently saturated and cannot process your request at this time. You can try again later. |
- | NetworkError network_error | Can not find api-key. | The environment variable configuration has not taken effect. Restart the client or IDE and try again. For more information, see FAQ. |
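Many of these errors can be detected and handled in code. The following is a minimal sketch, assuming the DashScope Python SDK and an illustrative model name (qwen-plus): it reads the API key from the DASHSCOPE_API_KEY environment variable, which avoids the missing-API-key errors at the top of the table, and prints the returned code and message if the call fails.
import os
import dashscope

# Read the API key from the DASHSCOPE_API_KEY environment variable
# instead of hard-coding it (avoids the "No api key provided" error).
dashscope.api_key = os.getenv("DASHSCOPE_API_KEY")

response = dashscope.Generation.call(
    model="qwen-plus",  # illustrative model name; replace with the model you actually use
    messages=[{"role": "user", "content": "Hello"}],
    result_format="message",
)

if response.status_code != 200:
    # code and message correspond to the Code and Message columns in the table above.
    print(response.status_code, response.code, response.message)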
Response parameters
In case of errors, an HTTP status code and a message that contains the details are returned. Sample response:
{
"request_id": "54dc32fd-968b-4aed-b6a8-ae63d6fda4d5",
"code": "InvalidApiKey",
"message": "The API key in your request is invalid."
}
Parameters description
Parameters | Type | Description |
HTTP status code | integer | The status code 200 indicates that the request is successful. Other status codes indicate that the request failed. |
request_id | string | The request ID. You can identify a call based on the request_id parameter during troubleshooting. |
code | string | The error code returned in case of failure. See the Code column of the preceding table. |
message | string | The error message returned in case of failure. See the Message column of the preceding table. Note that the message may vary based on the actual scenario and may contain more specific information than the preceding table. |
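When a call fails, record request_id together with code and message so that the failed call can be located during troubleshooting, and for throttling errors (HTTP status code 429) wait before retrying, as recommended above. Below is a minimal sketch with the DashScope Python SDK; the model name and backoff interval are illustrative only.
import time
import dashscope

def call_with_retry(messages, max_attempts=3):
    """Call the model, logging error details and backing off on throttling (429)."""
    for attempt in range(max_attempts):
        response = dashscope.Generation.call(
            model="qwen-plus",  # illustrative model name
            messages=messages,
            result_format="message",
        )
        if response.status_code == 200:
            return response
        # Log the fields described above so the call can be traced with support.
        print(f"request_id={response.request_id}, code={response.code}, message={response.message}")
        if response.status_code != 429:
            break  # non-throttling errors are not retried here
        time.sleep(2 ** attempt)  # simple exponential backoff before retrying
    return response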