
API Gateway: ai-prompt

Last Updated: May 22, 2025

The ai-prompt plug-in allows you to prepend or append a prompt to a large language model (LLM) request.

Runtime attributes

Plug-in execution stage: default stage. Plug-in execution priority: 450.
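
How you attach the plug-in depends on your deployment. The following is only a sketch that assumes a self-managed, Higress-style gateway where plug-ins are bound through a WasmPlugin resource; the API version, resource kind, and image URL are assumptions, not the exact interface of this product:

# Minimal sketch of a Higress-style WasmPlugin binding (names and URL are illustrative assumptions).
apiVersion: extensions.higress.io/v1alpha1
kind: WasmPlugin
metadata:
  name: ai-prompt
  namespace: higress-system
spec:
  priority: 450                                          # matches the execution priority listed above
  url: oci://example-registry/plugins/ai-prompt:latest   # hypothetical plug-in image location
  defaultConfig:
    prepend:
    - role: system
      content: "Answer the question in English."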

Configuration description

| Field | Data type | Required | Default value | Description |
| --- | --- | --- | --- | --- |
| prepend | array of message objects | Optional | - | The content that is prepended to the initial input. |
| append | array of message objects | Optional | - | The content that is appended to the initial input. |

The following table describes the parameters of a message object.

| Field | Data type | Required | Default value | Description |
| --- | --- | --- | --- | --- |
| role | string | Yes | - | The role of the message, such as system or user. |
| content | string | Yes | - | The content of the message. |

The following code shows a sample configuration:

prepend:
- role: system
  content: "Answer the question in English."
append:
- role: user
  content: "Each time you answer a question, try to ask a rhetorical question."

Use the preceding configuration to initiate a request:

curl http://localhost/test \
-H "content-type: application/json" \
-d '{
  "model": "gpt-3.5-turbo",
  "messages": [
    {
      "role": "user",
      "content": "Who are you?"
    }
  ]
}'

The following code shows a sample processed request:

curl http://localhost/test \
-H "content-type: application/json" \
-d '{
  "model": "gpt-3.5-turbo",
  "messages": [
    {
      "role": "system",
      content: "Answer the question in English."
    },
    {
      "role": "user",
      "content": "Who are you?"
    },
    {
      "role": "user",
      content: "Each time you answer a question, try to ask a rhetorical question."
    }
  ]
}'

Carry user geographic locations by combining the AI prompt decorator plug-in and the geo-ip plug-in

If you want to prepend or append the user's geographic location in LLM requests, you must keep both the geo-ip plug-in and the AI prompt plug-in enabled. In addition, you must ensure that the geo-ip plug-in has a higher priority than the AI prompt decorator plug-in in the same processing stage. This way, the geo-ip plug-in resolves the user's geographic location from the user's IP address and then passes the location information to subsequent plug-ins as request attributes. For example, in the default stage, the priority of the geo-ip plug-in is 1000, and that of the ai-prompt-decorator plug-in is 500.
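
The following sketch illustrates that ordering requirement. It assumes a Higress-style declarative plug-in configuration, and the field names are only illustrative; what matters is that the geo-ip plug-in is given the larger priority value so that it runs first:

# Illustrative pseudo-configuration: only the relative priorities matter.
- name: geo-ip
  priority: 1000    # resolves the location first and exposes attributes such as geo-country
- name: ai-prompt-decorator
  priority: 500     # runs afterwards and can reference ${geo-country}, ${geo-province}, ${geo-city}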

Configuration example of a geo-ip plug-in:

ipProtocal: "ipv4"

Configuration example of an ai-prompt-decorator plug-in:

prepend:
- role: system
  content: "The current geographic location of the user is: country:${geo-country}, province:${geo-province}, city:${geo-city}"
append:
- role: user
  content: "Each time you answer a question, try to ask a rhetorical question."

Use the preceding configurations to initiate a request:

curl http://localhost/test \
-H "content-type: application/json" \
-H "x-forwarded-for: 87.254.207.100,4.5.6.7" \
-d '{
  "model": "gpt-3.5-turbo",
  "messages": [
    {
      "role": "user",
      "content": "What is the weather like today?"
    }
  ]
}'

The following code shows a sample processed request:

curl http://localhost/test \
-H "content-type: application/json" \
-H "x-forwarded-for: 87.254.207.100,4.5.6.7" \
-d '{
  "model": "gpt-3.5-turbo",
  "messages": [
    {
      "role": "system",
      content: "The current geographic location of the user is: country: China, province: Beijing, city: Beijing"
    },
    {
      "role": "user",
      "content": "What is the weather like today?"
    },
    {
      "role": "user",
      content: "Each time you answer a question, try to ask a rhetorical question."
    }
  ]
}'