This topic describes how to use the Content Moderation SDK for Go to moderate text for spam, such as pornographic and terrorist content.
Background information
Only synchronous moderation is supported for the text anti-spam operation. For more information about parameters, see Synchronous text moderation.
You can send a request to moderate one or more text entries. You are charged based on the number of text entries that are moderated. For more information, see Overview.
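For example, the request body for moderating two text entries in a single call can be built as in the following minimal sketch. It only constructs and prints the JSON body; the dataId values are hypothetical identifiers of your own that help you match results to entries.

package main

import (
    "encoding/json"
    "fmt"
)

func main() {
    // Sketch of the request body only: the tasks array carries one object per
    // text entry, and each entry counts toward billing. dataId is an optional,
    // user-defined identifier (hypothetical values shown here).
    body, _ := json.Marshal(map[string]interface{}{
        "scenes": []string{"antispam"},
        "tasks": []map[string]interface{}{
            {"dataId": "text-001", "content": "First text entry"},
            {"dataId": "text-002", "content": "Second text entry"},
        },
    })
    fmt.Println(string(body))
}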
Prerequisites
The Content Moderation SDK for Go is installed.
Moderate text for spam
Text anti-spam allows you to add custom terms, such as the brand terms of competitors. If the moderated text contains the terms you add, block is returned for the suggestion parameter.
You can add terms in the Alibaba Cloud Content Moderation console or by calling an API operation.
| Operation | Description | Supported region |
| --- | --- | --- |
| TextScanRequest | Submits text moderation tasks. Set the scenes parameter to antispam. | |
package main

import (
    "encoding/json"
    "fmt"
    "strconv"

    "github.com/aliyun/alibaba-cloud-sdk-go/services/green"
)

func main() {
    // Use the AccessKey ID and AccessKey secret of your Alibaba Cloud account.
    client, err := green.NewClientWithAccessKey("cn-shanghai", "Your AccessKey ID", "Your AccessKey secret")
    if err != nil {
        fmt.Println(err.Error())
        return
    }

    task := map[string]interface{}{"content": "The content of the text to be moderated"}

    // scenes: the moderation scenario. Set the value to antispam.
    content, _ := json.Marshal(
        map[string]interface{}{
            "scenes": []string{"antispam"},
            "tasks":  []map[string]interface{}{task},
        },
    )

    textScanRequest := green.CreateTextScanRequest()
    textScanRequest.SetContent(content)

    textScanResponse, err := client.TextScan(textScanRequest)
    if err != nil {
        fmt.Println(err.Error())
        return
    }
    if textScanResponse.GetHttpStatus() != 200 {
        fmt.Println("response not success. status:" + strconv.Itoa(textScanResponse.GetHttpStatus()))
    }
    fmt.Println(textScanResponse.GetHttpContentString())
}
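The response body returned by GetHttpContentString is a JSON string. The following sketch shows one way to read the suggestion and label returned for each task. handleTextScanResult is a hypothetical helper, and the field names (code, msg, data, taskId, results, scene, suggestion, label, rate) are assumed to follow the documented response format of synchronous text moderation; adjust the structs to the actual response you receive. If the moderated text hits a custom term that you added, suggestion is block.

package main

import (
    "encoding/json"
    "fmt"
)

// These structs mirror the assumed response format of synchronous text moderation.
type scanResult struct {
    Scene      string  `json:"scene"`
    Suggestion string  `json:"suggestion"`
    Label      string  `json:"label"`
    Rate       float64 `json:"rate"`
}

type scanTask struct {
    Code    int          `json:"code"`
    TaskId  string       `json:"taskId"`
    Content string       `json:"content"`
    Results []scanResult `json:"results"`
}

type scanResponse struct {
    Code int        `json:"code"`
    Msg  string     `json:"msg"`
    Data []scanTask `json:"data"`
}

// handleTextScanResult parses the JSON string returned by
// TextScanResponse.GetHttpContentString() and prints the suggestion
// (pass, review, or block) and label for each moderated text entry.
func handleTextScanResult(body string) error {
    var resp scanResponse
    if err := json.Unmarshal([]byte(body), &resp); err != nil {
        return err
    }
    if resp.Code != 200 {
        return fmt.Errorf("moderation request failed: code=%d msg=%s", resp.Code, resp.Msg)
    }
    for _, task := range resp.Data {
        for _, result := range task.Results {
            // A hit on a custom term is returned as suggestion "block".
            fmt.Printf("taskId=%s scene=%s suggestion=%s label=%s\n",
                task.TaskId, result.Scene, result.Suggestion, result.Label)
        }
    }
    return nil
}

func main() {
    // Hypothetical response body used only to demonstrate parsing.
    sample := `{"code":200,"msg":"OK","data":[{"code":200,"taskId":"task-xxxx","content":"example","results":[{"scene":"antispam","suggestion":"block","label":"spam","rate":99.0}]}]}`
    if err := handleTextScanResult(sample); err != nil {
        fmt.Println(err.Error())
    }
}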
Provide feedback on text anti-spam results
If a text anti-spam result does not meet your expectations, you can use TextFeedbackRequest to provide feedback on the machine-assisted moderation result.
The server corrects the text anti-spam result based on your feedback and adds the text pattern to a text library. The next time you submit text that matches this pattern, the system returns the result corrected based on your feedback.
| Operation | Description | Supported region |
| --- | --- | --- |
| TextFeedbackRequest | Provides feedback on a text anti-spam result to correct a machine-assisted moderation result that does not meet your expectations. | |
package main

import (
    "encoding/json"
    "fmt"
    "strconv"

    "github.com/aliyun/alibaba-cloud-sdk-go/services/green"
)

func main() {
    // Use the AccessKey ID and AccessKey secret of your Alibaba Cloud account.
    client, err := green.NewClientWithAccessKey("cn-shanghai", "Your AccessKey ID", "Your AccessKey secret")
    if err != nil {
        fmt.Println(err.Error())
        return
    }

    // label: the expected category of moderation results for the moderated text
    // in the specified moderation scenario.
    content, _ := json.Marshal(
        map[string]interface{}{
            "taskId":  "ID of the text moderation task",
            "content": "Text content",
            "label":   "spam",
        },
    )

    request := green.CreateTextFeedbackRequest()
    request.SetContent(content)

    response, err := client.TextFeedback(request)
    if err != nil {
        fmt.Println(err.Error())
        return
    }
    if response.GetHttpStatus() != 200 {
        fmt.Println("response not success. status:" + strconv.Itoa(response.GetHttpStatus()))
    }
    fmt.Println(response.GetHttpContentString())
}
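To connect the two operations, you can take the taskId and content from a TextScan result that you consider incorrect and submit them to TextFeedback together with the category that you expect. The following sketch assumes that a text entry was blocked by mistake and that the expected category is normal; submitFeedback is a hypothetical helper, and the label value shown here is an assumption modeled on the example above.

package main

import (
    "encoding/json"
    "fmt"

    "github.com/aliyun/alibaba-cloud-sdk-go/services/green"
)

// submitFeedback is a hypothetical helper that reports the expected category
// for a previously moderated text entry. taskId and content come from the
// TextScan result; label is the category you expect (here "normal" is assumed
// to mean that the text should pass).
func submitFeedback(client *green.Client, taskId, content, label string) error {
    body, err := json.Marshal(map[string]interface{}{
        "taskId":  taskId,
        "content": content,
        "label":   label,
    })
    if err != nil {
        return err
    }

    request := green.CreateTextFeedbackRequest()
    request.SetContent(body)

    response, err := client.TextFeedback(request)
    if err != nil {
        return err
    }
    if response.GetHttpStatus() != 200 {
        return fmt.Errorf("response not success. status: %d", response.GetHttpStatus())
    }
    fmt.Println(response.GetHttpContentString())
    return nil
}

func main() {
    // Use the AccessKey ID and AccessKey secret of your Alibaba Cloud account.
    client, err := green.NewClientWithAccessKey("cn-shanghai", "Your AccessKey ID", "Your AccessKey secret")
    if err != nil {
        fmt.Println(err.Error())
        return
    }
    // taskId and content are taken from an earlier TextScan result.
    if err := submitFeedback(client, "ID of the text moderation task", "Text content", "normal"); err != nil {
        fmt.Println(err.Error())
    }
}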