
AI Coding Assistant Lingma: Configure filters

Last Updated: Apr 24, 2025

In an organization, both the Lingma administrator and the global administrator can configure security filtering policies for chat, code completion, and knowledge base upload. These settings are applicable to enterprise developers. Note that the role of global administrator is specific to Lingma Enterprise Dedicated Edition. This topic describes how to configure a filter.

Who can use this feature?

  • Editions: Lingma Enterprise Dedicated

  • Roles: Lingma admins and global admins in an organization

Before you start

Make sure that the Lingma plugin is upgraded to version 1.4.0 or later. This ensures that the specified filter configurations take effect.

How to configure a filter

  1. Log on to the Lingma console as the Lingma admin or global admin account in your organization. In the left-side navigation pane, choose Filter & Model > Filter.

  2. Select the Chat, Code Completion, or Knowledge Base Upload option based on your filtering scenario. Note that the Knowledge Base Upload option is available only in Enterprise Dedicated Edition.

  3. Enable the filter that suits your business requirements, and then configure the required parameters. The following list maps each scenario to its filter types:

    Chat

    • Model input filter: Once enabled, the filter applies in chat.

    • Model output filter

    Code completion

    • Model input filter: Once enabled, the filter applies in code completion.

    • Model output filter

    Knowledge base upload

    • Model input filter: Once enabled, the filter scans uploaded knowledge base files. Only files that pass the scan are stored.

    Important
    • After the chat filter or code completion filter is enabled or modified, it may take 5 to 10 minutes for these changes to take effect in the Lingma plugin.

    • Once enabled or modified, the knowledge base filter takes effect immediately and begins filtering knowledge base files upon upload.

Configure the model input filter

Method 1: Regular expression (Regex)

Note

During the expression configuration phase, administrators must thoroughly test and validate regular expressions before developers start using the Lingma plugin. Proper validation helps avoid performance degradation and other potential issues.

  • Processing Mode: Configure the filter by using regular expressions. The following modes are available:

    • Bypass: When a regular expression matches, the data is not processed.

    • Block: When a regular expression matches, the model request is blocked.

    • Replace: When a regular expression matches, the matched content is replaced with the configured replacement.

  • Notifications: Enable the notification feature to receive alerts. Notifications can be pushed to a platform through a webhook.

  • Execution Order: The execution sequence is determined by the order specified in configurations.

  • Maximum Number of Regular Expressions: A maximum of 10 regular expressions can be added.

  • Regular Expression Standard: Regular expression configurations follow the ECMAScript standard. They support common flags such as i (case-insensitive), g (global match), and s (dotAll mode).

  • Sample regular expression configurations:

    Rule name: ID card number
    Regular expression: (?<pre>.*)(\d{15})((\d{2})([0-9Xx]))(?<post>.*)
    Replaced by: $<pre>***$<post>
    Original text: ID card number: 330204197709022312.
    After replacement: ID card number: ***.

    Rule name: Email address
    Regular expression: \w+([-+.]\w+)*@\w+([-.]\w+)*\.\w+([-.]\w+)*
    Replaced by: ***
    Original text: My email address is lin***@aliyunmail.com.
    After replacement: My email address is ***.

    Rule name: Password
    Regular expression: (.*password=)([\w\d]+)(.*)
    Replaced by: $1***$3
    Original text: {password=1213213}
    After replacement: {password=***}
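As an illustration only (not part of the Lingma product), the Replace mode shown in the samples above can be reproduced with plain ECMAScript regular expressions; maskPassword and maskIdCard are hypothetical helper names:

```typescript
// Hypothetical helpers that mirror the sample Replace rules above.

// "Password" rule: keep the surrounding text and mask the password value.
function maskPassword(text: string): string {
  return text.replace(/(.*password=)([\w\d]+)(.*)/, '$1***$3');
}

// "ID card number" rule: named capture groups keep the prefix and suffix text.
function maskIdCard(text: string): string {
  return text.replace(
    /(?<pre>.*)(\d{15})((\d{2})([0-9Xx]))(?<post>.*)/,
    '$<pre>***$<post>',
  );
}
```

For example, maskPassword('{password=1213213}') returns '{password=***}', matching the sample above.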

Method 2: Custom script

Enterprise Dedicated Edition allows you to configure the filter by customizing a script, enabling tailored model input filtering for complex scenarios. Procedure:

Step 1: Develop a script

Create your script in TypeScript by performing the following operations:

  1. Download a Template Repository: Obtain the template repository from lingma-extension-template. The template repository includes all necessary scaffolding required for script development. Be sure to carefully read the README.md file and review the sample code.

  2. Implement the Model Input API. Your task is to implement an API named RequestPreHandler for model input filtering. Refer to Custom script API to understand how to define the API. The following code shows an example of how to implement the SensitiveContentFilter.ts file.

    // Imports follow the template repository layout.
    import { ResultUtil } from '../common/HandlerRespUtil';
    import { PayloadUtil } from '../common/PayloadUtil';
    import { LingmaSDKTool } from '../sdk/LingmaSDKTool';
    import {
      ActionEnum,
      RawRequest,
      RequestPreHandler,
    } from '../sdk/RequestHandleSDK';

    /**
     * Define the sensitive content filter. The filter preprocesses sensitive
     * information in the data that is input to the model.
     */
    export const sensitiveContentFilter: RequestPreHandler = {
      handle: async (request: RawRequest, SDKTool: LingmaSDKTool) => {
        const dataMap = PayloadUtil.getPayloadData(request.payload);
        for (const value of dataMap.values()) {
          if (value.includes('password')) {
            return ResultUtil.buildBlockResult('Content contains password');
          }
        }
        // To perform differentiated processing for different actions, branch on request.action.
        switch (request.action) {
          case ActionEnum.COMPLETION:
            // do something
            break;
          case ActionEnum.CODE_PROBLEM_SOLVE:
            // do something
            break;
          default:
            return ResultUtil.buildNoOpsResult();
        }
        return ResultUtil.buildNoOpsResult();
      },
    };
  3. Run and Test your Script: Execute the main method to run your debugging code and verify whether the script runs as expected. The following table describes the specific procedures that must be followed during testing.

    Step 1

    Edit the src/index.ts file, modify the main function, and adjust the debugged code. Sample code:

    async function main() {
      const value1 = ['password=123', 'abc'];
      const value2 = 'hello world';
      const dataMap = new Map<PayloadDataKeyEnum, PayloadDataValueType>();
      dataMap.set(PayloadDataKeyEnum.SELECTED_CODE, value1);
      dataMap.set(PayloadDataKeyEnum.USER_INPUT, value2);
    
      const mockRequest: RawRequest = {
        action: ActionEnum.CODE_GENERATE_COMMENT,
        payload: {
          associatedContexts: [],
          data: dataMap,
        },
        requestId: '123',
      };
      const response = await sensitiveContentFilter.handle(mockRequest, SDKTool);
      console.log(response);
    }

    Step 2

    Open the code file that you want to debug in VS Code and set a breakpoint. Select the program startup icon from the debug view, and then click the Run button.


Step 2: Compile and build the script

Compile a TypeScript file that has passed debugging into a JavaScript file. For example, you can compile the SensitiveContentFilter.ts file into the SensitiveContentFilter.js file. Procedure:

  1. Open the configuration file named src/build.js. Then, modify the entryPoints and outfile parameters. Specify the path of the TypeScript file to be compiled and built in the entryPoints parameter and the output path for the corresponding JavaScript file in the outfile parameter.

  2. Run the node build.js command in the root directory of your code repository. Upon execution, the corresponding JavaScript file is generated at the path specified by the outfile parameter.

Step 3: Locally test the script

Before uploading the script to the enterprise configuration backend, perform local testing to ensure that the script integrates with the Lingma plugin and filters risky completions or chat actions. Procedure:

  1. Copy the built JavaScript file to the /extension/local/script/ directory of the Lingma local storage path.

  2. Open the config.json file in the /extension/local/ directory of the Lingma local storage path, locate the contentHandlerScripts field, and add the script's configuration information to it. If the contentHandlerScripts field does not exist, create it as an array. Sample code:

    {
      "contentHandlerScripts": [
        {
          "identifier": "SensitiveContentFilter", 
          "name": "Filter sensitive content",
          "version": "1.0.0",
          "scriptPath": "~/.lingma/extension/local/script/SensitiveContentFilter.js",
          "state": "enabled",
          "bizType": "completion"
        }
      ]
    }

    The following list describes the required parameters.

    identifier: The unique script ID.

    name: The script name.

    version: The version number of the script. To change the script content, you must increase the version number; otherwise, the updated script does not take effect.

    scriptPath: The path where the script is stored. Take note of the following items:

    • The script must be stored in the /extension/local/script/ directory of the local storage path.

    • The name of the JavaScript file, such as SensitiveContentFilter.js, must be consistent with the identifier value.

    state: The script status. Valid values: enabled and disabled.

    bizType: The business scenario of the script. Valid values: completion and chat.
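The constraints above (the file name matching the identifier, and the allowed state and bizType values) can be checked mechanically. The following is a hypothetical sketch, not part of the Lingma tooling:

```typescript
import * as path from 'path';

interface ScriptEntry {
  identifier: string;
  name: string;
  version: string;
  scriptPath: string;
  state: string;
  bizType: string;
}

// Returns a list of violations of the constraints described above.
function validateEntry(entry: ScriptEntry): string[] {
  const errors: string[] = [];
  // The JavaScript file name must be consistent with the identifier value.
  if (path.basename(entry.scriptPath, '.js') !== entry.identifier) {
    errors.push('scriptPath file name must match identifier');
  }
  if (!['enabled', 'disabled'].includes(entry.state)) {
    errors.push('state must be "enabled" or "disabled"');
  }
  if (!['completion', 'chat'].includes(entry.bizType)) {
    errors.push('bizType must be "completion" or "chat"');
  }
  return errors;
}
```

Running validateEntry on the sample configuration above returns an empty list.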

Step 4: Upload the script

After completing local debugging and verification, upload your script.

  1. Log on to the Lingma console, choose Filter & Model > Filter, and then select the scenario where security filters need to be enabled.

  2. Select custom scripts as the required filter type.

  3. Upload the built JavaScript file.

  4. After uploading the file, click Save Configurations. The configurations take effect on the plugin side in about 5 minutes.

Custom script API

The following modes are available for custom script processing:

  • Block: The subsequent process is blocked. No inference request is sent to the LLM, and the current request is interrupted.

  • Filter: The sent data is modified, for example, obfuscated, deleted, or replaced. The process then continues.

  • Bypass: The sent data is returned without any changes, and the process continues.

API definitions
/**
 * Preprocessing API of AI Coding Assistant.
 */
export interface RequestPreHandler {
  // Process the current request.
  handle: (request: RawRequest, SDKTool: LingmaSDKTool) => Promise<HandlerResponse>;
}
Request parameter definitions
/**
 * Define the request object, including the action that triggers the current request and the raw data to be input to the LLM.
 */
export interface RawRequest {
  // The action that triggers the current request.
  action: ActionEnum;
  // The payload that encapsulates the raw data.
  payload: ContentPayload;
  // The unique identifier of the current request, which can be used to track request execution.
  requestId: string;
}

// The value types in ContentPayload.data.
export type PayloadDataValueType = string | number | string[];
/**
 * Encapsulate the raw data that is input to the LLM.
 */
export class ContentPayload {
  // The data set to be processed. See the definitions of PayloadDataKeyEnum for the available keys.
  data: Map<PayloadDataKeyEnum, PayloadDataValueType>;
  // The context associated with data processing.
  associatedContexts: ContextItem[];

  constructor() {
    this.data = new Map<PayloadDataKeyEnum, PayloadDataValueType>();
    this.associatedContexts = [];
  }
}

/**
 * Define keys in ContentPayload.data.
 */
export enum PayloadDataKeyEnum {
  // The code snippet selected by the user.
  SELECTED_CODE = 'lingma:code',
  // The text entered by the user.
  USER_INPUT = 'lingma:text',
  // The error message.
  ERROR_MESSAGES = 'lingma:error_messages',
  // The log information on the terminal.
  TERMINAL_CONTENT = 'lingma:terminal_content',
  // The preceding code snippet of the line where the current cursor is located when the code is completed.
  PREFIX_CODE = 'lingma:code_prefix',
  // The following code snippet of the line where the current cursor is located when the code is completed.
  SUFFIX_CODE = 'lingma:code_suffix',
  // The similar code snippet.
  SIMILAR_CODE = 'lingma:similar_code',
}

/**
 * Define actions that trigger requests.
 */
export enum ActionEnum {
    // Generate a unit test.
    GENERATE_TESTCASE           = 'GENERATE_TESTCASE',
    // Generate a comment.
    CODE_GENERATE_COMMENT       = 'CODE_GENERATE_COMMENT',
    // Interpret the code.
    EXPLAIN_CODE                = 'EXPLAIN_CODE',
    // Optimize the code.
    OPTIMIZE_CODE               = 'OPTIMIZE_CODE',
    // Start a chat. You can enter text in the chat input box.
    FREE_INPUT                  = 'FREE_INPUT',
    // Quickly fix a code issue.
    CODE_PROBLEM_SOLVE          = 'CODE_PROBLEM_SOLVE',
    // Generate a shell command.
    TERMINAL_COMMAND_GENERATION = 'TERMINAL_COMMAND_GENERATION',
    // Fix a terminal error.
    TERMINAL_EXPLAIN_FIX        = 'TERMINAL_EXPLAIN_FIX',
    // Complete the code.
    COMPLETION                  = 'COMPLETION',
}
Response parameter definitions
/**
 * Return model input filter results.
 */
export class HandlerResponse {
  // The processing policy that is used to control the subsequent processing logic.
  handlePolicy: HandlePolicy;
  // The reason.
  reason?: string;
  // When handlePolicy is set to FILTER, you must set this filtering property. The property value is the filtered data and must be consistent in structure with RawRequest.payload.
  payload?: ContentPayload;
  constructor() {
    // The default value.
    // eslint-disable-next-line @typescript-eslint/no-use-before-define
    this.handlePolicy = HandlePolicy.NO_OPS;
    this.reason = '';
    this.payload = new ContentPayload();
  }
}

/**
 * Return the available processing policies.
 */
export enum HandlePolicy {
  // Block. This policy indicates that requests are directly blocked.
  BLOCK = 'BLOCK',
  // Filter. This policy indicates that requests are intercepted and the payload content is modified.
  FILTER = 'FILTER',
  // Bypass. This policy indicates that requests are not processed.
  NO_OPS = 'NO_OPS',
}
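To illustrate the FILTER policy (which the sensitive-content example above does not use), the following self-contained sketch uses simplified stand-ins for the SDK types; it masks password values and returns the modified payload:

```typescript
// Simplified stand-ins for the SDK types; the real definitions are above.
type Payload = Map<string, string>;
interface Response {
  handlePolicy: 'BLOCK' | 'FILTER' | 'NO_OPS';
  reason?: string;
  payload?: Payload;
}

// Mask password values; if anything changed, return FILTER with the new payload.
function filterPasswords(payload: Payload): Response {
  let changed = false;
  const out: Payload = new Map();
  for (const [key, value] of payload.entries()) {
    if (value.includes('password=')) {
      out.set(key, value.replace(/password=[\w\d]+/g, 'password=***'));
      changed = true;
    } else {
      out.set(key, value);
    }
  }
  return changed
    ? { handlePolicy: 'FILTER', reason: 'Masked password values', payload: out }
    : { handlePolicy: 'NO_OPS' };
}
```

Unchanged payloads yield NO_OPS, so the filter only intervenes when it actually rewrites data.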

Configure the model output filter

Method 1: Regular expression (Regex)

Note

During the expression configuration phase, administrators must thoroughly test and validate regular expressions before developers start using the Lingma plugin. Proper validation helps avoid performance degradation and other potential issues.

  • Processing Mode: Configure the filter by using regular expressions. Only the following mode is available:

    • Bypass: When a regular expression matches, the data is not processed.

  • Notifications: Enable the notification feature to receive alerts. Notifications can be pushed to a platform through a webhook.

  • Execution Order: The execution sequence is determined by the order specified in configurations.

  • Maximum Number of Regular Expressions: A maximum of 10 regular expressions can be added.

  • Regular Expression Standard: Regular expression configurations follow the ECMAScript standard and support common flags such as i (case-insensitive), g (global match), and s (dotAll mode).

  • Sample regular expression configurations:

    Rule name: ID card number
    Regular expression: (?<pre>.*)(\d{15})((\d{2})([0-9Xx]))(?<post>.*)
    Original text: ID card number: 330204197709022312.

    Rule name: Email address
    Regular expression: \w+([-+.]\w+)*@\w+([-.]\w+)*\.\w+([-.]\w+)*
    Original text: My email address is lin***@aliyunmail.com.

    Rule name: Password
    Regular expression: (.*password=)([\w\d]+)(.*)
    Original text: {password=1213213}
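Because the output filter only supports Bypass, a matched rule serves as a detection signal (for example, to drive a webhook notification) rather than a transformation. A hypothetical sketch built from the sample rules above:

```typescript
// Hypothetical detector: returns the names of the rules whose regular
// expressions match the model output.
const outputRules: Array<{ name: string; pattern: RegExp }> = [
  { name: 'ID card number', pattern: /(\d{15})((\d{2})([0-9Xx]))/ },
  { name: 'Email address', pattern: /\w+([-+.]\w+)*@\w+([-.]\w+)*\.\w+([-.]\w+)*/ },
  { name: 'Password', pattern: /(.*password=)([\w\d]+)(.*)/ },
];

function matchedRules(text: string): string[] {
  return outputRules.filter((rule) => rule.pattern.test(text)).map((rule) => rule.name);
}
```

For example, matchedRules('{password=1213213}') returns only the Password rule name.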

Method 2: Custom script

Enterprise Dedicated Edition allows you to configure the filter by customizing a script, enabling tailored model output filtering for complex scenarios. Procedure:

Step 1: Develop a script

Create your script in TypeScript by performing the following operations:

  1. Download a Template Repository: Obtain the template repository from lingma-extension-template. The template repository includes all necessary scaffolding required for script development. Be sure to carefully read the README.md file and review the sample code.

  2. Implement the Model Output API. Your task is to implement an API named RequestPostHandler for model output filtering. Refer to Custom script API to understand how to define the API. The following code shows an example of how to implement the LLMChatAuditHandler.ts file.

    import { ResultUtil } from '../common/HandlerRespUtil';
    import { Config } from '../sdk/ConfigManager';
    import { LingmaSDKTool } from '../sdk/LingmaSDKTool';
    import axios from "axios";
    import moment from "moment";
    import os from "os";
    import {
      AIResponse,
      HandlePolicy,
      RawRequest,
      RequestPostHandler
    } from '../sdk/RequestHandleSDK';

    /**
     * Configure the custom script output filter. The filter sends the request
     * content to a remote server for processing, such as code scanning and
     * content audit.
     */
    export const llmResultAuditHandler: RequestPostHandler = {
      handle: async (request: RawRequest, response: AIResponse, SDKTool: LingmaSDKTool) => {
        // The name of the operator.
        let userName = SDKTool.user.name;
        // The ID of the operator.
        let userId = SDKTool.user.uid;
        // The IDE platform.
        let ide = SDKTool.idePlatform;
        // The IDE version.
        let ideVersion = SDKTool.ideVersion;
        // The operation time.
        let operationTime = moment().format("YYYY-MM-DD HH:mm:ss");
        // The IP address of the operation.
        let operationIp = getIpAddress();
        // The business scenario of the operation: code completion or chat.
        let bizType = "chat";
        // The request ID of the operation.
        let requestId = request.requestId;
        // The operation that is performed.
        let action = request.action;
        // The operation content. To avoid failed reporting caused by oversized
        // content, select appropriate fields based on your audit requirements.
        // The operation content cannot exceed 16 KB in size.
        let inferredResult = response.inferredResult.text;

        // Report the operation content to Simple Log Service (SLS).
        // The name of the project in SLS.
        let slsProject = "xxx";
        // The name of the Logstore in SLS.
        let slsLogStore = "xxx";
        // The endpoint of the region where the SLS instance resides.
        let endPoint = "cn-hangzhou.log.aliyuncs.com";
        // Encode values so that spaces and special characters do not break the URL.
        let slsWebTrackingUrl = `http://${slsProject}.${endPoint}/logstores/${slsLogStore}/track?APIVersion=0.6.0&request_id=${requestId}&action=${action}&biz_type=${bizType}&user_name=${encodeURIComponent(userName)}&user_id=${userId}&ide=${ide}&ide_version=${ideVersion}&operation_time=${encodeURIComponent(operationTime)}&operation_ip=${operationIp}&inferredResult=${encodeURIComponent(inferredResult)}`;
        axios.get(slsWebTrackingUrl).catch((error) => {
          console.error(error);
        });

        // Return the filtering results.
        return ResultUtil.buildPostHandlerResponse(HandlePolicy.NO_OPS, response.inferredResult, 'No operations are required.');
      },
    };

    /**
     * Add the custom script filter to the configurations.
     * @param config Manages configurations in a unified manner; provided by LingmaExtensionSDK.
     */
    export function modifyConfig(config: Config) {
      config.postContentHandlers.push(llmResultAuditHandler);
      return config;
    }

    function getIpAddress() {
      const interfaces = os.networkInterfaces();
      for (let devName in interfaces) {
        let iface = interfaces[devName];
        if (!iface) {
          continue;
        }
        for (let i = 0; i < iface.length; i++) {
          let alias = iface[i];
          if (
            alias.family === "IPv4" &&
            alias.address !== "127.0.0.1" &&
            !alias.internal
          ) {
            return alias.address;
          }
        }
      }

      return "No IP address found";
    }
  3. Run and Test your Script: Execute the main method to run your debugging code and verify whether the script runs as expected. The following table describes the specific procedures that must be followed during testing.

    Step 1

    Edit the src/index.ts file, modify the main function, and adjust the debugged code. Sample code:

    async function main() {
      const value2 = 'hello world';
      const dataMap = new Map<PayloadDataKeyEnum, PayloadDataValueType>();
      dataMap.set(PayloadDataKeyEnum.USER_INPUT, value2);
    
      const request: RawRequest = {
        action: ActionEnum.CODE_GENERATE_COMMENT,
        payload: {
          associatedContexts: [],
          data: dataMap,
        },
        requestId: 'test-request-id',
      };
    
      const aiResponse: AIResponse = {
        inferredResult: {
          text: 'reply hello world',
        },
      };
    
      const response = await llmResultAuditHandler.handle(request, aiResponse, SDKTool);
      console.log(response);
    }

    Step 2

    Open the code file that you want to debug in VS Code and set a breakpoint, select the program startup icon from the debug view, and then click the Run button.


Step 2: Compile and build the script

Compile a TypeScript file that has passed debugging into a JavaScript file. For example, you can compile the LLMChatAuditHandler.ts file into the LLMChatAuditHandler.js file. Procedure:

  1. Open the configuration file named src/build.js and then modify the entryPoints and outfile parameters. You can specify the path of the TypeScript file to be compiled and built in the entryPoints parameter and the output path of the corresponding JavaScript file in the outfile parameter.

  2. Run the node build.js command in the root directory of your code repository. After successful execution, the corresponding JavaScript file is generated at the path specified by the outfile parameter.

Step 3: Locally test the script

Before uploading the script to the enterprise configuration backend, perform local testing to ensure that the script correctly integrates with the Lingma plugin and properly filters risky completions or chat actions. Procedure:

  1. Copy the built JavaScript file to the /extension/local/script/ directory of the Lingma local storage path.

  2. Open the config.json file in the /extension/local/ directory of the Lingma local storage path, locate the contentHandlerScripts field, and add the script's configuration information to it. If the contentHandlerScripts field does not exist, create it as an array. Sample code:

    {
      "contentHandlerScripts": [
        {
          "identifier": "LLMChatAuditHandler", 
          "name": "AIChat audit",
          "version": "1.0.0",
          "scriptPath": "~/.lingma/extension/local/script/LLMChatAuditHandler.js",
          "state": "enabled",
          "stage":"post",
          "bizType": "completion"
        }
      ]
    }

    The following list describes the required parameters.

    identifier: The unique script ID.

    name: The script name.

    version: The version number of the script. To change the script content, you must increase the version number; otherwise, the updated script does not take effect.

    scriptPath: The path where the script is stored. Take note of the following items:

    • The script must be stored in the /extension/local/script/ directory of the local storage path.

    • The name of the JavaScript file, such as LLMChatAuditHandler.js, must be consistent with the identifier value.

    state: The script status. Valid values: enabled and disabled.

    stage: The processing stage of the script. The value post indicates model output filtering, and the value pre indicates model input filtering. Default value: pre.

    bizType: The business scenario of the script. Valid values: completion and chat.

Step 4: Upload the script

After completing local debugging and verification, perform the following operations to upload your script:

  1. Log on to the Lingma console, choose Filter & Model > Filter, and then select the scenario where security filters need to be enabled.

  2. Select custom scripts as the required filter type.

  3. Upload the built JavaScript file.

  4. After uploading the file, click Save Configurations. The configurations take effect on the plugin side within approximately 5 minutes.

Custom script API

Only the following mode is available to process custom scripts:

  • Bypass: Sent data is returned without any changes, and then the subsequent process continues.

API definitions
/**
 * Output filtering API of AI Coding Assistant.
 * @param request The current request sent by the user.
 * @param response Inference content returned by the LLM.
 * @param SDKTool SDK tool class. You can use this tool class to obtain information about IDE and plugins.
 * @returns The returned results that are processed by the model output filter.
 */
export interface RequestPostHandler {
  // The model output filtering method.
  handle: (request: RawRequest, response: AIResponse, SDKTool: LingmaSDKTool) => Promise<PostHandlerResponse>;
}
Request parameter definitions
/**
 * Define the request object, including the action that triggers the current request and the raw data to be input to the LLM.
 */
export interface RawRequest {
  // The action that triggers the current request.
  action: ActionEnum;
  // The payload that encapsulates the raw data.
  payload: ContentPayload;
  // The unique identifier of the current request, which can be used to track request execution.
  requestId: string;
}

/**
 * Define the results that are generated by model inference.
 */
export class InferredResult {
  // The text content that is generated by the LLM.
  text: string;
  constructor() {
    this.text = '';
  }
}

// The value types in ContentPayload.data.
export type PayloadDataValueType = string | number | string[];
/**
 * Encapsulate the raw data that is input to the LLM.
 */
export class ContentPayload {
  // The data set to be processed. See the definitions of PayloadDataKeyEnum for the available keys.
  data: Map<PayloadDataKeyEnum, PayloadDataValueType>;
  // The context associated with data processing.
  associatedContexts: ContextItem[];

  constructor() {
    this.data = new Map<PayloadDataKeyEnum, PayloadDataValueType>();
    this.associatedContexts = [];
  }
}

/**
 * Define keys in ContentPayload.data.
 */
export enum PayloadDataKeyEnum {
  // The code snippet selected by the user.
  SELECTED_CODE = 'lingma:code',
  // The text entered by the user.
  USER_INPUT = 'lingma:text',
  // The error message.
  ERROR_MESSAGES = 'lingma:error_messages',
  // The log information on the terminal.
  TERMINAL_CONTENT = 'lingma:terminal_content',
  // The preceding code snippet of the line where the current cursor is located when the code is completed.
  PREFIX_CODE = 'lingma:code_prefix',
  // The following code snippet of the line where the current cursor is located when the code is completed.
  SUFFIX_CODE = 'lingma:code_suffix',
  // The similar code snippet.
  SIMILAR_CODE = 'lingma:similar_code',
  // The file path of the completed code.
  FILE_PATH = 'lingma:file_path',
}

/**
 * Define actions that trigger requests.
 */
export enum ActionEnum {
    // Generate a unit test.
    GENERATE_TESTCASE           = 'GENERATE_TESTCASE',
    // Generate a comment.
    CODE_GENERATE_COMMENT       = 'CODE_GENERATE_COMMENT',
    // Interpret the code.
    EXPLAIN_CODE                = 'EXPLAIN_CODE',
    // Optimize the code.
    OPTIMIZE_CODE               = 'OPTIMIZE_CODE',
    // Start a chat. You can enter text in the chat input box.
    FREE_INPUT                  = 'FREE_INPUT',
    // Quickly fix a code issue.
    CODE_PROBLEM_SOLVE          = 'CODE_PROBLEM_SOLVE',
    // Generate a shell command.
    TERMINAL_COMMAND_GENERATION = 'TERMINAL_COMMAND_GENERATION',
    // Fix a terminal error.
    TERMINAL_EXPLAIN_FIX        = 'TERMINAL_EXPLAIN_FIX',
    // Complete the code.
    COMPLETION                  = 'COMPLETION',
}
Response parameter definitions
/**
 * Return the model output filtering results.
 */
export class PostHandlerResponse {
  // The processing policy that is used to control the subsequent processing logic.
  handlePolicy: HandlePolicy;
  // The reason.
  reason?: string;
  // The returned results that are processed by the model output filter.
  processedResult: InferredResult;
  constructor() {
    // The default value.
    this.handlePolicy = HandlePolicy.NO_OPS;
    this.reason = '';
    this.processedResult = new InferredResult();
  }
}

/**
 * Encapsulate the returned LLM results.
 */
export class AIResponse {
  // The model inference results.
  inferredResult: InferredResult;
  constructor() {
    this.inferredResult = new InferredResult();
  }
}

/**
 * Encapsulate the available processing policies. Only NO_OPS is supported for the model output filter.
 */
export enum HandlePolicy {
  // Block. This policy indicates that requests are directly blocked.
  BLOCK = 'BLOCK',
  // Filter. This policy indicates that requests are intercepted and the payload content is modified.
  FILTER = 'FILTER',
  // Bypass. This policy indicates that requests are not processed.
  NO_OPS = 'NO_OPS',
}
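Tying the output definitions together, here is a minimal self-contained sketch (with simplified stand-in types) of a post handler that audits the model output and, as required, returns NO_OPS with the result unchanged:

```typescript
// Simplified stand-ins for the SDK types defined above.
interface AIResp { inferredResult: { text: string } }
interface PostResp {
  handlePolicy: 'NO_OPS';
  reason?: string;
  processedResult: { text: string };
}

// Inspect or report the model output, then bypass: the output filter must not
// modify the result, so it always returns NO_OPS with the original text.
function auditOutput(response: AIResp, report: (text: string) => void): PostResp {
  report(response.inferredResult.text); // e.g. forward to an audit log or SLS
  return {
    handlePolicy: 'NO_OPS',
    reason: 'Audited only',
    processedResult: response.inferredResult,
  };
}
```

The report callback stands in for whatever reporting channel your script uses, such as the SLS WebTracking call in the example above.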

Configure the knowledge base filter (applicable only to Enterprise Dedicated Edition)

Enterprise Dedicated Edition allows the Lingma admin and the global admin to configure the knowledge base filter on the Filter page. Once a filter is configured, admins can review knowledge base files before uploading them.

Procedure

Step 1: Enable and edit the knowledge base filter

  1. In the left-side navigation pane, choose Filter & Model > Filter. On the page that appears, click the Knowledge Base Upload tab.

  2. On the page that appears, turn on Enable Knowledge Base Filter. Then, configure the following parameters.

    URL (required): The address of the interface for accessing the third-party scanning service provided by the enterprise. The interface must accept POST requests.

    Token Header (required): The name of the request header field that carries the token.

    Secret Key (required): The secret key used to generate the token. The token is added to the specified request header field to verify the validity of the request. For more information, see the "Security token" section of this topic.

Step 2: Test filter connectivity

  1. After completing the configuration, click the Test button to perform a connectivity test. If the third-party filtering interface returns a 2xx status code, the test is successful.

  2. If another status code is returned, the test fails. Check the entered information before retrying the test.

Step 3: Save the configurations of the knowledge base filter

Click Save Configurations to save your filter configurations. Once saved, the filter takes effect immediately.

API reference for third-party scanning service

To ensure that the knowledge base filter works as configured, the enterprise must provide a third-party scanning service that scans knowledge base content before it is uploaded; only approved content is stored. The scanning service interface must meet the following design requirements:

Request headers

  • X-Auth-Raw (required): The authentication header of the interface. The header name is the Token Header field name that you specify on the filter configuration page. The header value is the token generated with the hash algorithm. For more information, see the "Security token" section of this topic.
    Example: 6c3baa76c62550eab864e6f75c4bb

  • Content-Type (required): The content type of the request and response.
    Example: multipart/form-data

  • Security Token: A security token serves as a security signature that is designed by Alibaba Cloud to prevent unauthorized access to cloud services. To generate a token, you need a secret key, the current timestamp, a hash algorithm, and additional information.

  • Token Generation: When the Lingma system calls the third-party scanning service interface, a security token must be included in the request header for authentication. The token value is calculated based on the following parameters:

    token = sha256Hex(method + url + timestamp + tokenSecret) + timestampHex

    • method: The request method. POST is always used.

    • url: The URL of the scanning service interface that you enter when configuring the knowledge base filter.

    • timestamp: The current time as a Unix timestamp in seconds.

    • tokenSecret: The secret key that you enter when configuring the knowledge base filter.

    • timestampHex: The timestamp in hexadecimal format, appended to the hash so that the server can recover the timestamp from the token.

  • Token Verification: Your third-party scanning service must verify the token of a request to ensure its validity. Refer to the following sample code to do this.

Important
  • Timestamp: The client and server times must be synchronized to avoid failed token verifications due to time discrepancies.

  • Key Management: Safeguard the token specified by the tokenSecret parameter. Do not disclose it to unauthorized users.

  • Expiration Time: Adjust the validity period of the token based on your business requirements. In this example, the validity period of the token is set to 60 seconds.

/*
 * Parameters:
 * receivedHash: the received hash value that contains the timestamp information. 
 * tokenSecret: the key that is used to generate the hash. 
 * url: the URL of the current request. 
 */
public boolean validateAuthRaw(String receivedHash, String tokenSecret, String url) {
    final String method = "POST";

    // A well-formed token is a 64-character SHA-256 hex digest followed by an 8-character timestamp.
    if (receivedHash == null || receivedHash.length() != 72) {
        return false;
    }

    // Extract the timestamp from the receivedHash value.
    String tsHex = receivedHash.substring(receivedHash.length() - 8);
    long tsSec;
    try {
        tsSec = Long.parseLong(tsHex, 16);
    } catch (NumberFormatException e) {
        return false; // The timestamp suffix is not valid hexadecimal.
    }

    // Calculate the difference between the current time and the token time. Assume that the allowed maximum time difference is 60 seconds.
    long now = System.currentTimeMillis() / 1000L;
    if (Math.abs(now - tsSec) > 60) {
        return false; // The token has expired.
    }

    // Construct the string to be signed.
    String plain = method + url + tsSec + tokenSecret;

    // Generate the expected hash value.
    String expectedHash = org.apache.commons.codec.digest.DigestUtils.sha256Hex(plain);

    // Compare the received hash value with the expected hash value.
    return expectedHash.equals(receivedHash.substring(0, receivedHash.length() - 8));
}
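The generation side of the same token formula, which is normally computed by the Lingma system, can be useful when writing tests against your scanning service. The following is a minimal sketch using only the Java standard library; the URL and secret key are placeholder values, not real configuration.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

public class TokenDemo {
    // Hex-encoded SHA-256 digest of a string, standard library only.
    static String sha256Hex(String s) {
        try {
            byte[] d = MessageDigest.getInstance("SHA-256")
                    .digest(s.getBytes(StandardCharsets.UTF_8));
            StringBuilder sb = new StringBuilder();
            for (byte b : d) sb.append(String.format("%02x", b));
            return sb.toString();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        String method = "POST";                             // POST is always used
        String url = "https://scan.example.com/inspect";    // placeholder: the configured URL
        String tokenSecret = "my-secret";                   // placeholder: the configured secret key
        long tsSec = System.currentTimeMillis() / 1000L;    // Unix timestamp in seconds
        String timestampHex = String.format("%08x", tsSec); // 8 hexadecimal characters

        // token = sha256Hex(method + url + timestamp + tokenSecret) + timestampHex
        String token = sha256Hex(method + url + tsSec + tokenSecret) + timestampHex;
        System.out.println(token); // 72 characters: 64-char digest + 8-char timestamp
    }
}
```

A token produced this way passes the validateAuthRaw check above when the same url and tokenSecret are used within the 60-second window.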

Request parameters

  • metadata (string, required): The business metadata, a JSON object that contains the user and queryId fields (Content-Type: application/json).
    Example: {"user": "user0000001", "queryId": "cd2fd109-c4d4-489f-9b27-53752f7827d6"}

  • file (file, required): The file to be inspected.

Sample request

Content-Type: multipart/form-data; boundary=${bound}

--${bound}
Content-Disposition: form-data; name="metadata"
Content-Type: application/json

{
  "user": "user0000001",
  "queryId": "cd2fd109-c4d4-489f-9b27-53752f7827d6"
}

--${bound}
Content-Disposition: form-data; name="file"; filename="test-file.pdf"
Content-Type: application/pdf

%binary-file-content-here%
--${bound}--
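For local testing of a scanning service, a request body in this layout can be assembled by hand. The following sketch builds such a body as a string; the boundary, file name, and file content are illustrative placeholders, and a real client would send binary file bytes rather than a string.

```java
public class MultipartDemo {
    // Build a multipart/form-data body matching the sample request layout above.
    static String buildBody(String boundary, String metadataJson,
                            String filename, String fileContent) {
        StringBuilder sb = new StringBuilder();
        // Part 1: the JSON metadata.
        sb.append("--").append(boundary).append("\r\n");
        sb.append("Content-Disposition: form-data; name=\"metadata\"\r\n");
        sb.append("Content-Type: application/json\r\n\r\n");
        sb.append(metadataJson).append("\r\n");
        // Part 2: the file to be inspected.
        sb.append("--").append(boundary).append("\r\n");
        sb.append("Content-Disposition: form-data; name=\"file\"; filename=\"")
          .append(filename).append("\"\r\n");
        sb.append("Content-Type: application/pdf\r\n\r\n");
        sb.append(fileContent).append("\r\n");
        // Closing boundary.
        sb.append("--").append(boundary).append("--\r\n");
        return sb.toString();
    }

    public static void main(String[] args) {
        String body = buildBody("bound123",
            "{\"user\":\"user0000001\",\"queryId\":\"cd2fd109-c4d4-489f-9b27-53752f7827d6\"}",
            "test-file.pdf", "%binary-file-content-here%");
        System.out.println(body);
    }
}
```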

Response parameters

The interface must return HTTP status code 200 and a response body that contains the following parameters.

  • forbidden (boolean, required): The security inspection result. The value true indicates that the file failed the inspection.
    Example: false

  • errorMsg (string, optional): The error message that explains why the inspection failed.
    Example: "The file contains malicious content. Please modify and upload it again."

  • queryId (string, optional): The request ID, echoing the queryId field in the request metadata.
    Example: "cd2fd109-c4d4-489f-9b27-53752f7827d6"

  • user (string, optional): The user ID, echoing the user field in the request metadata.
    Example: "user0001"
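As an illustration only, a response body reporting a file that failed inspection could look like the following; all field values here are placeholders.

```json
{
  "forbidden": true,
  "errorMsg": "The file contains malicious content. Please modify and upload it again.",
  "queryId": "cd2fd109-c4d4-489f-9b27-53752f7827d6",
  "user": "user0000001"
}
```

For a file that passes inspection, forbidden is false and errorMsg can be omitted.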