Microservices Engine: AI prompt

Last Updated: Jan 07, 2025

This topic describes the AI prompt plug-in, which allows you to prepend or append prompt messages to requests sent to a large language model (LLM).

Running attributes

Plug-in execution stage: default stage. Plug-in execution priority: 450.

Configuration description

Name    | Data type                | Required | Default value | Description
prepend | array of message objects | No       | -             | The messages that are prepended to the original request messages.
append  | array of message objects | No       | -             | The messages that are appended to the original request messages.

The following table describes the parameters of the message object.

Name    | Data type | Required | Default value | Description
role    | string    | Yes      | -             | The role of the message, for example, system or user.
content | string    | Yes      | -             | The content of the message.

The following code shows a sample configuration.

prepend:
- role: system
  content: "Please answer the question in English"
append:
- role: user
  content: "After answering each question, try to ask a follow-up question"

Use the preceding configuration to initiate a request.

curl http://localhost/test \
-H "content-type: application/json" \
-d '{
  "model": "gpt-3.5-turbo",
  "messages": [
    {
      "role": "user",
      "content": "Who are you?"
    }
  ]
}'

The following code shows the actual request content after the plug-in processes the request.

curl http://localhost/test \
-H "content-type: application/json" \
-d '{
  "model": "gpt-3.5-turbo",
  "messages": [
    {
      "role": "system",
      "content": "Please answer the question in English"
    },
    {
      "role": "user",
      "content": "Who are you?"
    },
    {
      "role": "user",
      "content": "After answering each question, try to ask a follow-up question"
    }
  ]
}'
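Conceptually, the plug-in concatenates the configured prepend messages, the original messages array, and the configured append messages, which is what the processed request above shows. The following Python sketch illustrates this merge logic; it is a minimal illustration rather than the plug-in's actual implementation, and the function and variable names are assumptions.

# Minimal sketch of the prepend/append merge, assuming the plug-in simply
# concatenates prepend + original messages + append. Names are illustrative.
from typing import Dict, List

def decorate_messages(body: Dict, prepend: List[Dict], append: List[Dict]) -> Dict:
    """Return a copy of the request body with prompts prepended and appended."""
    decorated = dict(body)
    decorated["messages"] = list(prepend) + list(body.get("messages", [])) + list(append)
    return decorated

if __name__ == "__main__":
    request = {
        "model": "gpt-3.5-turbo",
        "messages": [{"role": "user", "content": "Who are you?"}],
    }
    prepend = [{"role": "system", "content": "Please answer the question in English"}]
    append = [{"role": "user", "content": "After answering each question, try to ask a follow-up question"}]
    print(decorate_messages(request, prepend, append))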

Enable the ai-prompt-decorator plug-in to carry user location information provided by the geo-ip plug-in

To prepend or append user location information to LLM requests, you must enable both the geo-ip and ai-prompt-decorator plug-ins. In the same request processing stage, the geo-ip plug-in must have a higher priority than the ai-prompt-decorator plug-in, because the geo-ip plug-in first resolves the user location from the client IP address and then passes the information to subsequent plug-ins as request attributes. For example, in the default stage, set the priority of the geo-ip plug-in to 1000 and the priority of the ai-prompt-decorator plug-in to 500.

The following code shows the sample configuration of the geo-ip plug-in.

ipProtocal: "ipv4"

The following code shows the sample configuration of the ai-prompt-decorator plug-in.

prepend:
- role: system
  content: "The current location information of the user is Country: ${geo-country}, Province: ${geo-province}, City: ${geo-city}"
append:
- role: user
  content: "After answering each question, try to ask a follow-up question"

Use the preceding configuration to initiate a request.

curl http://localhost/test \
-H "content-type: application/json" \
-H "x-forwarded-for: 87.254.207.100,4.5.6.7" \
-d '{
  "model": "gpt-3.5-turbo",
  "messages": [
    {
      "role": "user",
      "content": "How is the weather today?"
    }
  ]
}'

The following code shows the actual request content after the plug-ins process the request.

curl http://localhost/test \
-H "content-type: application/json" \
-H "x-forwarded-for: 87.254.207.100,4.5.6.7" \
-d '{
  "model": "gpt-3.5-turbo",
  "messages": [
    {
      "role": "system",
      "content": "The current location information of the user is Country: China, Province: Beijing, City: Beijing"
    },
    {
      "role": "user",
      "content": "How is the weather today?"
    },
    {
      "role": "user",
      "content": "After answering each question, try to ask a follow-up question"
    }
  ]
}'
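The location values are not hard-coded in the decorator configuration. The geo-ip plug-in resolves them from the client IP address and passes them to the ai-prompt-decorator plug-in as request attributes, and the decorator replaces the ${geo-country}, ${geo-province}, and ${geo-city} placeholders before merging the messages. The following Python sketch illustrates this substitution step; the attribute keys and the helper function are assumptions for illustration and are not the plug-in's actual API.

# Minimal sketch of the ${geo-*} placeholder substitution, assuming the
# geo-ip plug-in exposes country/province/city as request attributes.
import re
from typing import Dict

def fill_geo_placeholders(content: str, attributes: Dict[str, str]) -> str:
    """Replace ${...} placeholders with values resolved by the geo-ip plug-in."""
    def replace(match: re.Match) -> str:
        key = match.group(1)                        # for example, "geo-country"
        return attributes.get(key, match.group(0))  # keep the placeholder if no value exists
    return re.sub(r"\$\{([A-Za-z0-9_-]+)\}", replace, content)

if __name__ == "__main__":
    attrs = {"geo-country": "China", "geo-province": "Beijing", "geo-city": "Beijing"}
    template = ("The current location information of the user is "
                "Country: ${geo-country}, Province: ${geo-province}, City: ${geo-city}")
    print(fill_geo_placeholders(template, attrs))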