
Artificial Intelligence Recommendation: A/B Service Integration

Last Updated: Dec 16, 2024

In recommendation scenarios, users often need to adjust recall policies, ranking models, and model parameters to test new ideas. PAI-Rec has developed a lightweight A/B testing service aimed at minimizing interference with existing systems and supporting rapid experimentation. The service provides server-side experimental features.

Introduction to A/B Service

The A/B service mainly includes the following components:

  • AB Web Console: Acts as the backend management system for the A/B service, used for experiment configuration. Data is persisted to MySQL.

  • AB Server: Provides HTTP API services and is deployed in PAI-EAS. Because the server reads data from MySQL, the EAS service must be able to directly access the MySQL database; make sure the network connection is available. For more information, see Configure network connectivity.

  • AB SDK: Needs to be integrated into the server-side program and supports experiment configuration and traffic allocation policies. The SDK performs traffic splitting and experiment matching for each request, and you can take further actions based on the returned results. The current version supports Go, Python, and Java.


Integrate A/B Service Experimental Features

Operation Principle

At runtime, the recall module first checks whether relevant experimental parameters exist. If they do, it uses the existing recall instance to reflectively call the CloneWithConfig method, passing in the experimental parameters to generate a new recall instance. The newly generated recall instance is registered in the system, and subsequent calls return the registered instance directly to avoid repeated creation.
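
The following sketch illustrates this check-clone-register flow. The names Recall, recallRegistry, and loadRecall are hypothetical and used only for illustration; CloneWithConfig is the method described above (PAI-Rec invokes it reflectively, and a concrete implementation is shown in step 5 of the procedure below).

    // Hypothetical sketch of the check-clone-register flow; these names are not part of the PAI-Rec API.
    type Recall interface {
        // CloneWithConfig builds a new recall instance from experimental parameters.
        CloneWithConfig(params map[string]interface{}) Recall
    }

    // Registered experimental recall instances, keyed by recall name.
    var recallRegistry = map[string]Recall{}

    func loadRecall(name string, base Recall, expParams map[string]interface{}) Recall {
        if r, ok := recallRegistry[name]; ok {
            // Subsequent calls return the registered instance directly.
            return r
        }
        if len(expParams) == 0 {
            // No experimental parameters: keep using the existing instance.
            return base
        }
        // Clone the existing instance with the experimental parameters and register it.
        r := base.CloneWithConfig(expParams)
        recallRegistry[name] = r
        return r
    }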

Procedure

  1. Set Environment

    The environment value can be set to daily, prepub, or product. There are two methods to set it, as shown in the example below:

    • Configure RunMode in config.json.

    • Set the environment variable PAIREC_ENVIRONMENT, which takes precedence over the settings in config.json.
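
    For example, to run in the product environment, you can either add the following fragment to config.json (other configuration keys are omitted here):

      {
        "RunMode": "product"
      }

    or set the environment variable before starting the service, which takes precedence over config.json:

      export PAIREC_ENVIRONMENT=product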

  2. Configure Experimental Parameters

    PAI-Rec predefines some experimental parameters. When setting these parameters, you must strictly follow the naming convention. Otherwise, even if the experiment is matched, the corresponding parameters cannot be found.

    The predefined parameters are listed below (parameter, type, description, and example):

    • Category+".RecallNames" (json array): List of recalls. Must include all recall names.
      Example: "default.RecallNames": ["HomepageEtrecRecall", "HomepageDssmRecall"]

    • "recall."+specific recall name (json object): Creates a new recall based on the recall configuration.
      Example: {"recall.MyRecall": {"version": "v2"}}

    • filterNames (json array): Filter list. Must include all filter flows.
      Example: {"filterNames": ["UniqueFilter", "UserExposureFilter"]}

    • rankconf (recconf.RankConfig): Configuration of the ranking algorithm.
      Example: "rankconf": {"RankAlgoList": ["pai_homepage_fm"], "RankScore": "${pai_homepage_fm}"}

    • features.scene.name (string): Name of the scenario that the feature load corresponds to.
      Example: "homepage"

    • user_features.scene.name (string): Name of the scenario that the prefetched user features correspond to. For more information, see Prefetch user features.
      Example: "Home_feed"

    • Category+".SortNames" (json array): List of sorting modules. Must include all sorting module names.
      Example: "default.SortNames": ["RetargetDistinctSort", "RetargetSort", "TagWeightSort", "PositionReviseSort", "DiversitySortHead"]

    • "sort."+specific sort name (json object): Creates a new sort based on the sort configuration.
      Example:
      {
        "sort.RetargetSortV": {
          "Debug": false,
          "BoostScoreConditions": [
            {
              "Conditions": [
                {
                  "Name": "recall_name",
                  "Domain": "item",
                  "Type": "string",
                  "Value": "retarget_u2i",
                  "Operator": "equal"
                }
              ],
              "Expression": "score * 1.0"
            }
          ]
        }
      }

    • generalRankConf (recconf.GeneralRankConfig): Configuration of coarse ranking, including user feature loading and the RankConf algorithm configuration. For specific operations, see Configure coarse ranking.
      Example: {"generalRankConf": {"FeatureLoadConfs": [{"FeatureDaoConf": {}}], "RankConf": {}, "ActionConfs": []}}

    • coldStartGeneralRankConf (recconf.ColdStartGeneralRankConfig): Cold-start coarse ranking configuration. For specific operations, see Configure coarse ranking.
      Example: {"coldStartGeneralRankConf": {"FeatureLoadConfs": [{"FeatureDaoConf": {}}], "RankConf": {}, "ActionConfs": []}}

    • coldStartRankConf (recconf.ColdStartRankConfig): Configuration of the rank stage for cold-start recall; specifies the rank algorithm.
      Example: {"coldStartRankConf": {"RecallName": "ColdStartRecall", "AlgoName": "linucb"}}

  3. Match Experiment

    Each request needs to match the experiment and construct the context. The following sample code provides an example:

    	func (c *HomeFeedController) makeRecommendContext() {
    		c.context = context.NewRecommendContext()
    		c.context.Size = c.param.Limit
    		c.context.Param = &c.param
    		c.context.RecommendId = c.RequestId
    		c.context.Config = recconf.Config
    		c.context.Debug = c.param.Debug

    		// Construct the A/B experiment context for the current request.
    		abcontext := model.ExperimentContext{
    			Uid:          c.param.DistinctId,
    			RequestId:    c.RequestId,
    			FilterParams: map[string]interface{}{},
    		}

    		// Match the experiment for the scenario and attach the result to the recommend context.
    		if abtest.GetExperimentClient() != nil {
    			c.context.ExperimentResult = abtest.GetExperimentClient().MatchExperiment(c.param.SceneId, &abcontext)
    			log.Info(c.context.ExperimentResult.Info())
    		}
    	}
  4. Adjust Experimental Parameters

    The RecommendContext object can obtain specific experimental parameters through context.ExperimentResult. Use GetLayerParams to get experimental parameters on a certain layer. It supports Get, GetInt, GetFloat, and GetInt64 methods. The first parameter is the parameter name, and the second parameter is the default value, which is returned when the specified parameter is not found.

    	count := r.recallCount
    	if context.ExperimentResult != nil {
    		count = context.ExperimentResult.GetLayerParams("").GetInt("recall.base.count", count)
    	}
    	fmt.Println("test count", count)
  5. Conduct Experiments on Recalls

    When conducting experiments on specific recalls, you can dynamically adjust the recall configuration through different parameters. The following rules must be followed:

    • The parameter name format is: "recall."+existing recall name.

    • The parameter configuration should be in JSON mapping form.

    • The relevant recall instance must implement the CloneWithConfig method to generate a new recall instance based on the given parameters. For example:

      type MyRecall struct {
      	version string
      }

      func NewMyRecall() *MyRecall {
      	r := MyRecall{version: "v1"}
      	return &r
      }

      // CloneWithConfig generates a new recall instance from the given experimental parameters.
      func (m *MyRecall) CloneWithConfig(params map[string]interface{}) *MyRecall {
      	r := MyRecall{}
      	if v, ok := params["version"]; ok {
      		r.version = v.(string)
      	}
      	return &r
      }
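
      For example, assuming the experiment carries the parameter {"recall.MyRecall": {"version": "v2"}} from the table above, the framework reflectively calls CloneWithConfig on the existing instance, which is roughly equivalent to the following sketch:

      	// Illustration only: in PAI-Rec the call is made via reflection at runtime.
      	base := NewMyRecall() // existing instance, version "v1"
      	exp := base.CloneWithConfig(map[string]interface{}{"version": "v2"})
      	fmt.Println(exp.version) // prints "v2"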

Integrate A/B Service Parameter Features

To use the parameter features, refer to the following sample code:

// Obtain the specific scenario name
scene := context.GetParameter("scene").(string)

// Get the parameter list based on the scenario, then use the specific Get* function to get the specific parameter value
count := abtest.GetParams(scene).GetInt("count", 100)
fmt.Println("recall count:", count)