title: ai-aliyun-content-moderation keywords:

  • Apache APISIX
  • API Gateway
  • Plugin
  • ai-aliyun-content-moderation description: This document contains information about the Apache APISIX ai-aliyun-content-moderation Plugin.

Description

The ai-aliyun-content-moderation plugin integrates with Aliyun's content moderation service to check both request and response content for inappropriate material when working with LLMs. It supports both real-time streaming checks and final packet moderation.

This plugin must be used in routes that utilize the ai-proxy or ai-proxy-multi plugins.

Plugin Attributes

| Field | Required | Type | Description |
| ----- | -------- | ---- | ----------- |
| endpoint | Yes | String | Aliyun service endpoint URL |
| region_id | Yes | String | Aliyun region identifier |
| access_key_id | Yes | String | Aliyun access key ID |
| access_key_secret | Yes | String | Aliyun access key secret |
| check_request | No | Boolean | Enable request content moderation. Default: true |
| check_response | No | Boolean | Enable response content moderation. Default: false |
| stream_check_mode | No | String | Streaming moderation mode. Default: "final_packet". Valid values: ["realtime", "final_packet"] |
| stream_check_cache_size | No | Integer | Max characters per moderation batch in realtime mode. Default: 128. Must be >= 1. |
| stream_check_interval | No | Number | Seconds between batch checks in realtime mode. Default: 3. Must be >= 0.1. |
| request_check_service | No | String | Aliyun service for request moderation. Default: "llm_query_moderation" |
| request_check_length_limit | No | Number | Max characters per request moderation chunk. Default: 2000. |
| response_check_service | No | String | Aliyun service for response moderation. Default: "llm_response_moderation" |
| response_check_length_limit | No | Number | Max characters per response moderation chunk. Default: 5000. |
| risk_level_bar | No | String | Threshold for content rejection. Default: "high". Valid values: ["none", "low", "medium", "high", "max"] |
| deny_code | No | Number | HTTP status code for rejected content. Default: 200. |
| deny_message | No | String | Custom message for rejected content. Default: -. |
| timeout | No | Integer | Request timeout in milliseconds. Default: 10000. Must be >= 1. |
| ssl_verify | No | Boolean | Enable SSL certificate verification. Default: true. |
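The streaming parameters only take effect when check_response is enabled and the client requests a streamed completion. As an illustrative sketch (all values here are examples, not recommendations), a configuration that moderates a stream in batches of at most 256 characters, checked no more often than once per second, could look like this:

```json
"ai-aliyun-content-moderation": {
  "endpoint": "https://green.cn-hangzhou.aliyuncs.com",
  "region_id": "cn-hangzhou",
  "access_key_id": "your-aliyun-access-key-id",
  "access_key_secret": "your-aliyun-access-key-secret",
  "check_response": true,
  "stream_check_mode": "realtime",
  "stream_check_cache_size": 256,
  "stream_check_interval": 1
}
```

Larger batches and longer intervals reduce the number of calls to the moderation service at the cost of letting more unmoderated content reach the client before a violation is detected.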

Example usage

First initialise these shell variables:

ADMIN_API_KEY=edd1c9f034335f136f87ad84b625c8f1
ALIYUN_ACCESS_KEY_ID=your-aliyun-access-key-id
ALIYUN_ACCESS_KEY_SECRET=your-aliyun-access-key-secret
ALIYUN_REGION=cn-hangzhou
ALIYUN_ENDPOINT=https://green.cn-hangzhou.aliyuncs.com
OPENAI_KEY=your-openai-api-key

Create a route with the ai-aliyun-content-moderation and ai-proxy plugins like so:

curl "http://127.0.0.1:9180/apisix/admin/routes/1" -X PUT \
  -H "X-API-KEY: ${ADMIN_API_KEY}" \
  -d '{
    "uri": "/v1/chat/completions",
    "plugins": {
      "ai-proxy": {
        "provider": "openai",
        "auth": {
          "header": {
            "Authorization": "Bearer '"$OPENAI_KEY"'"
          }
        },
        "override": {
          "endpoint": "http://localhost:6724/v1/chat/completions"
        }
      },
      "ai-aliyun-content-moderation": {
        "endpoint": "'"$ALIYUN_ENDPOINT"'",
        "region_id": "'"$ALIYUN_REGION"'",
        "access_key_id": "'"$ALIYUN_ACCESS_KEY_ID"'",
        "access_key_secret": "'"$ALIYUN_ACCESS_KEY_SECRET"'",
        "risk_level_bar": "high",
        "check_request": true,
        "check_response": true,
        "deny_code": 400,
        "deny_message": "Your request violates content policy"
      }
    }
  }'

The ai-proxy plugin is used here as it simplifies access to LLMs. However, you may configure the LLM in the upstream configuration as well.
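As a sketch of the upstream-based alternative (the node address and TLS settings below are illustrative assumptions, not part of the plugin's documentation), you could point the route's upstream directly at the LLM provider and keep only the moderation plugin:

```shell
curl "http://127.0.0.1:9180/apisix/admin/routes/1" -X PUT \
  -H "X-API-KEY: ${ADMIN_API_KEY}" \
  -d '{
    "uri": "/v1/chat/completions",
    "plugins": {
      "ai-aliyun-content-moderation": {
        "endpoint": "'"$ALIYUN_ENDPOINT"'",
        "region_id": "'"$ALIYUN_REGION"'",
        "access_key_id": "'"$ALIYUN_ACCESS_KEY_ID"'",
        "access_key_secret": "'"$ALIYUN_ACCESS_KEY_SECRET"'"
      }
    },
    "upstream": {
      "type": "roundrobin",
      "scheme": "https",
      "pass_host": "node",
      "nodes": { "api.openai.com:443": 1 }
    }
  }'
```

In this setup you would also need to attach the provider's Authorization header yourself, for example via the proxy-rewrite plugin, since ai-proxy is no longer injecting it.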

Now send a request:

curl http://127.0.0.1:9080/v1/chat/completions -i \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-3.5-turbo",
    "messages": [
      {"role": "user", "content": "I want to kill you"}
    ],
    "stream": false
  }'

The request will be blocked with an error like this:

HTTP/1.1 400 Bad Request
Content-Type: application/json

{"id":"chatcmpl-123","object":"chat.completion","model":"gpt-3.5-turbo","choices":[{"index":0,"message":{"role":"assistant","content":"Your request violates content policy"},"finish_reason":"stop"}],"usage":{"prompt_tokens":0,"completion_tokens":0,"total_tokens":0}}
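Because check_response is enabled, streamed responses are moderated as well. A sketch of a streaming request follows (the prompt and model name are illustrative); with the default stream_check_mode of "final_packet", the assembled response is checked once before the stream completes, while "realtime" mode checks it in batches as chunks arrive:

```shell
curl http://127.0.0.1:9080/v1/chat/completions -i \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-3.5-turbo",
    "messages": [
      {"role": "user", "content": "Tell me a story"}
    ],
    "stream": true
  }'
```

If the moderation service flags the response at or above the configured risk_level_bar, the stream is terminated and the deny_message is returned in place of the remaining content.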