This topic describes how to access a model deployed on Platform for AI (PAI) through an AI gateway.
Prerequisites
You have deployed a model on PAI. For more information, see Deploy a model with one click.
You have created an AI gateway instance. For more information, see Create a gateway instance.
If you access the model over a private endpoint, make sure that the AI gateway instance and the PAI model reside in the same virtual private cloud (VPC).
Procedure
Step 1: Create an AI service
Log on to the AI Gateway console.
In the navigation pane on the left, choose Instance. In the top menu bar, select a region.
On the Instance page, click the target instance ID.
In the navigation pane on the left, choose Service, then click the Services tab.
Click Create Service. In the Create Service panel, configure the following parameters for the AI service:

Configuration Item
Description
Service Source
Select AI Service.
Service Name
Enter a name for the gateway service, such as pai.
Large Model Supplier
Select the large model provider for the AI service. In this example, select PAI-EAS. If you do not have a model deployed on PAI through EAS, see Deploy a model with one click.
Workspace
The PAI workspace. For example, pai_xqrj0u0t******.
EAS Service
Select the name of the model deployed on PAI. For example, Qwen3-32B.
Connection Type
Select the connection type for the model deployed on PAI. Options include Internet and Private. In this example, select Private.
API-KEY
The API key for the model deployed on PAI is automatically obtained.
Note: This API key is used for identity verification between the AI gateway and the PAI-EAS service. The AI gateway automatically obtains the API key and establishes communication with the PAI-EAS service.
Step 2: Create a Model API
Log on to the AI Gateway console.
In the navigation pane on the left, choose Instance. In the top menu bar, select a region.
On the Instance page, click the target instance ID.
In the navigation pane on the left, choose Model API, then click Create Model API.
In the Create Model API panel, configure the following basic information:
Base Path: Specify the base path for the API.
Services: Select the AI service that you created in Step 1: Create an AI service.

Click OK to create the Model API.
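After the Model API is created, clients call the model through the gateway endpoint. The following sketch shows what such a call might look like; the endpoint, API key, and model name are placeholders for illustration, not values from this topic:

```shell
# Hypothetical values -- replace with your gateway's endpoint and your API key.
GATEWAY_ENDPOINT="http://env-example.alicloudapi.com"
API_KEY="YOUR_API_KEY"

# The Model API route exposes the /v1/chat/completions chat API by default.
REQUEST_BODY='{"model": "Qwen3-32B", "messages": [{"role": "user", "content": "Hello"}]}'

# Printed for inspection only; run the curl command against your own gateway.
echo curl -X POST "${GATEWAY_ENDPOINT}/v1/chat/completions" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer ${API_KEY}" \
  -d "${REQUEST_BODY}"
```

The Authorization header and request body follow the common OpenAI-compatible convention; check the code sample that AI Gateway generates on the cURL Command tab for the exact format your instance expects.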
Step 3: Debug the Model API
In the Actions column for the target Model API, click Debug.
In the Route Debugging panel, select the PAI model from the Model drop-down list. Then, on the Model Response tab, you can interact with the large model.
Important: The Model Response tab uses the /v1/chat/completions chat API by default. To use other APIs, debug with curl commands or an SDK on the cURL Command or Raw Output tab.
[Example] To call the completions API on the cURL Command tab, perform the following steps:
On the cURL Command tab, copy the code sample provided by AI Gateway.
Replace the url in the code sample with /v1/completions.
Modify the data (body) section of the code sample to match the format that is required by /v1/completions:
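As a sketch, the modified command might look like the following. The endpoint and API key are placeholders, and the exact body fields depend on the model you deployed; the key difference from the chat API is that /v1/completions takes a plain prompt string instead of a messages array:

```shell
# Hypothetical values -- replace with your gateway's endpoint and your API key.
GATEWAY_ENDPOINT="http://env-example.alicloudapi.com"
API_KEY="YOUR_API_KEY"

# /v1/completions expects a "prompt" field rather than a "messages" array.
REQUEST_BODY='{"model": "Qwen3-32B", "prompt": "Write a haiku about the sea.", "max_tokens": 128}'

# Printed for inspection only; run the curl command against your own gateway.
echo curl -X POST "${GATEWAY_ENDPOINT}/v1/completions" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer ${API_KEY}" \
  -d "${REQUEST_BODY}"
```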
