
Platform For AI:Use Stable Diffusion web UI to deploy an AI painting service

Last Updated:Jul 25, 2024

Stable Diffusion is a stable and easy-to-use image generation model that can generate high-quality, high-resolution images. The open source Stable Diffusion WebUI is a browser interface for Stable Diffusion models that is developed based on the Gradio library and provides various image generation tools. Elastic Algorithm Service (EAS) of Platform for AI (PAI) provides a scenario-based deployment mode that allows you to deploy a model service based on Stable Diffusion WebUI by configuring several parameters. You can use the deployed service to perform model inference and generate images based on text prompts. This topic describes how to deploy and call the Stable Diffusion WebUI service. This topic also provides answers to some frequently asked questions about the service.

Background information

Features and Benefits

The deployment in EAS provides the following features and benefits:

  • Ease of use: You can quickly deploy a ready-to-use model service in EAS. EAS also allows you to dynamically switch underlying resources, such as GPUs, based on your business requirements.

  • Enterprise-level features: Separate frontend and backend services support multi-user scheduling across multiple GPUs, user isolation, and bill splitting.

  • Plug-ins and optimization: PAI provides the open source edition of Stable Diffusion WebUI, the PAI-Blade tool, and the FileBrowser and ModelZoo plug-ins. The PAI-Blade tool is used for performance optimization. The FileBrowser plug-in is used to upload models and images from on-premises clients and download models and images to on-premises clients. The self-developed ModelZoo plug-in is used to accelerate the download of open source models.

Editions

The Stable Diffusion WebUI service provides the following editions:

  • Standard Edition: suitable for a single user who wants to call the service by using the web UI or by calling the API of the service that is deployed on a single instance.

  • Cluster Edition WebUI: suitable for group work scenarios in which multiple members use the web UI to generate images. In addition to the inference service instances, the system creates a proxy instance to process requests from the web UI. Cluster Edition WebUI has the following benefits:

    • The system assigns a separate working directory for each user to facilitate storage and management of files, such as models and generated images or videos.

    • The system provides a separate backend environment for each user. If you configure multiple instances for the service, tasks of a single user run in sequence and tasks of multiple users are distributed among instances to implement efficient GPU sharing.

  • API Edition: suitable for high-concurrency scenarios. The system automatically deploys the service as an asynchronous service. The system creates a queue service instance in addition to the asynchronous inference service instances.

  • Serverless Edition: is available only in the China (Shanghai) and China (Hangzhou) regions. The deployment of a Serverless Edition service is provided free of charge. You are billed based on the duration of service calls. The system automatically scales the service based on the service requests. You can call the Serverless Edition service only by using the web UI.

Procedure

  1. Deploy a model service

    Select a deployment method based on your business scenario.

  2. Call the model service

    Select one of the following methods based on the edition you use:

    • Call the service by using the web UI

      Use the web UI to send a service request. This method is supported for Standard Edition, Cluster Edition WebUI, and Serverless Edition.

    • Call the service by using API operations

      Send a service request by calling API operations. This method is supported only for Standard Edition and API Edition. Standard Edition supports synchronous service calls. API Edition supports asynchronous service calls.

  3. Install plug-ins for enhanced features

    PAI provides various plug-ins that offer additional features. You can view and install plug-ins on the Extensions tab of the web UI page.

Prerequisites

  • An Apsara File Storage NAS (NAS) file system or Object Storage Service (OSS) bucket is created to store the model files and generated images.

  • If you call a service by using API operations, you must configure environment variables. For more information, see Configure access credentials.

Deploy a model service

The following deployment methods are supported:

Method 1: Scenario-based model deployment (recommended)

This method is applicable to Standard Edition, API Edition, Cluster Edition, and Serverless Edition. Perform the following steps:

  1. Go to the Elastic Algorithm Service (EAS) page.

    1. Log on to the PAI console.

    2. In the left-side navigation pane, click Workspaces. On the Workspaces page, click the name of the workspace to which you want to deploy the model.

    3. In the left-side navigation pane, choose Model Deployment > Elastic Algorithm Service (EAS) to go to the Elastic Algorithm Service (EAS) page.

  2. On the Elastic Algorithm Service (EAS) page, click Deploy Service. In the Scenario-based Model Deployment section, click AI Painting - SD Web UI Deployment.

  3. On the AI Painting - SD Web UI Deployment page, configure the following parameters.

    Parameter

    Description

    Basic Information

    Edition

    The edition of the service. Valid values:

    • Standard Edition: suitable for a single user who wants to call the service by using the web UI or by calling the API of the service that is deployed on a single instance.

    • API Edition: suitable for high-concurrency scenarios. The system automatically deploys the service as an asynchronous service. This edition supports service calls only by using API operations.

    • Cluster Edition WebUI: suitable for multiple users to call the service by using the web UI at the same time. This edition supports service calls only by using the web UI. If you want to enable multiple users to use the same Stable Diffusion web UI to generate images, you can select the Cluster Edition WebUI.

    • Serverless Edition: The service deployment is free of charge. You are billed based on the duration of service calls. The system automatically scales the service based on your service requests. You can call the Serverless Edition service only by using the web UI.

    For information about the scenarios of each edition, see the Background information section of this topic.

    Model Settings

    Click Add to configure the storage that is used to store model files and generated images. Valid values:

    • Mount OSS: Select an existing OSS bucket. Compared with NAS, OSS is more convenient for data upload and download and can generate an Internet access address for generated images. However, the speed of switching models and saving images is slower.

    • Mount NAS: NAS supports faster model switching and image saving. This option is supported only for the following editions: Standard Edition, API Edition, and Cluster Edition WebUI.

      • NAS Mount Target: Select an existing NAS file system and mount target.

      • NAS Source Path: Set the value to /.

    In this example, OSS is used.

    Resource Configuration

    Resource Configuration

    This parameter is supported only for the following editions: Standard Edition, API Edition, and Cluster Edition WebUI.

    Select GPU. On the Instance Type list that appears, select an instance type. We recommend that you select ml.gu7i.c16m60.1-gu30 for cost efficiency.

    Inference Acceleration

    The inference acceleration feature. Valid values:

    • PAI-Blade: PAI-Blade is a general-purpose inference optimization tool provided by PAI. You can use PAI-Blade to optimize a trained model for optimal inference performance.

    • xFormers: xFormers is an open source acceleration tool based on Transformer that can effectively accelerate image generation.

    • Not Accelerated: The inference acceleration feature is disabled.

    VPC Configuration (Optional)

    VPC

    If you set Model Settings to Mount NAS, the system automatically selects the virtual private cloud (VPC) in which the NAS file system resides.

    vSwitch

    Security Group Name

  4. After you configure the parameters, click Deploy.

Method 2: Custom deployment

This method is applicable to the following editions: Standard Edition, API Edition, and Cluster Edition WebUI. Perform the following steps:

  1. Go to the Elastic Algorithm Service (EAS) page.

    1. Log on to the PAI console.

    2. In the left-side navigation pane, click Workspaces. On the Workspaces page, click the name of the workspace to which you want to deploy the model.

    3. In the left-side navigation pane, choose Model Deployment > Elastic Algorithm Service (EAS) to go to the Elastic Algorithm Service (EAS) page.

  2. On the Elastic Algorithm Service (EAS) page, click Deploy Service. In the Custom Model Deployment section, click Custom Deploy.

  3. On the Create Service page, configure the following parameters.

    Parameter

    Description

    Model Service Information

    Deployment Mode

    • To deploy a Standard Edition or Cluster Edition WebUI service, select Deploy Web App by Using Image.

    • To deploy an API Edition service, select Deploy Service by Using Image and turn on Asynchronous Inference Services.

    Select Image

    Select PAI Image and then select stable-diffusion-webui and the latest version from the drop-down lists.

    • x.x-standard: Standard Edition

    • x.x.x-api: API Edition

    • x.x.x-cluster-webui: Cluster Edition WebUI

    Note
    • The image version is frequently updated. We recommend that you select the latest version.

    • If you want to allow multiple users to use a Stable Diffusion WebUI to generate images, select the x.x.x-cluster-webui version.

    • For information about the scenarios of each edition, see the Background information section of this topic.

    Specify Model Settings

    Click Specify Model Settings to configure the location where you want to store the model file and the image that is generated by model inference. Valid values:

    • Mount OSS Path

      • Select an OSS path of an existing bucket.

      • Mount Path: Set the value to /code/stable-diffusion-webui/data.

    • Mount NAS File System

      • NAS Mount Target: Select an existing NAS file system and mount target.

      • NAS Source Path: Set the value to /.

      • Mount Path: Set the value to /code/stable-diffusion-webui/data.

    • Mount PAI Model

      • Select a PAI model and a model version.

      • Mount Path: Set the value to /code/stable-diffusion-webui/data.

    In this example, an OSS bucket is mounted.

    Command to Run

    After you configure the image version, the system automatically specifies a command.

    • After you configure the model settings, append the --data-dir /code/stable-diffusion-webui/data parameter to the command to mount data to the specific path.

    • Optional. You can also append the --blade or --xformers parameter to the command to enable inference acceleration. For more information, see the "What parameters can I configure when I create a service?" section of this topic.
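    For reference, with the data directory and xFormers acceleration options appended, the full command might look similar to the following sketch. This is based on the default command shown in the JSON deployment examples in this topic; the exact default depends on the image version that you select.

    ```shell
    ./webui.sh --listen --port 8000 --skip-version-check --no-hashing \
      --no-download-sd-model --skip-prepare-environment --api --filebrowser \
      --data-dir /code/stable-diffusion-webui/data --xformers
    ```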

    Resource Deployment Information

    Resource Configuration

    Select GPU. On the Instance Type list that appears, select an instance type. We recommend that you select ml.gu7i.c16m60.1-gu30 for cost efficiency.

    VPC Settings

    VPC

    If you set Model Settings to Mount NAS, the system automatically selects the VPC in which the NAS file system resides.

    vSwitch

    Security Group Name

  4. After you configure the parameters, click Deploy.

Method 3: JSON deployment

You can deploy a Stable Diffusion WebUI service by using a JSON script. The following section describes how to use JSON to deploy Standard Edition and API Edition services.

  1. Go to the Elastic Algorithm Service (EAS) page.

    1. Log on to the PAI console.

    2. In the left-side navigation pane, click Workspaces. On the Workspaces page, click the name of the workspace to which you want to deploy the model.

    3. In the left-side navigation pane, choose Model Deployment > Elastic Algorithm Service (EAS) to go to the Elastic Algorithm Service (EAS) page.

  2. On the Elastic Algorithm Service (EAS) page, click Deploy Service. In the Configuration Editor section, click JSON Deployment.

  3. On the JSON Deployment page, configure the following content in the JSON format:

    Standard Edition

    {
        "metadata": {
            "instance": 1,
            "name": "sd_v32",
            "enable_webservice": true
        },
        "containers": [
            {
                "image": "eas-registry-vpc.<region>.cr.aliyuncs.com/pai-eas/stable-diffusion-webui:4.2",
                "script": "./webui.sh --listen --port 8000 --skip-version-check --no-hashing --no-download-sd-model --skip-prepare-environment --api --filebrowser --data-dir=/code/stable-diffusion-webui/data",
                "port": 8000
            }
        ],
        "cloud": {
            "computing": {
                "instance_type": "ml.gu7i.c16m60.1-gu30",
                "instances": null
            },
            "networking": {
                "vpc_id": "vpc-t4nmd6nebhlwwexk2****",
                "vswitch_id": "vsw-t4nfue2s10q2i0ae3****",
                "security_group_id": "sg-t4n85ksesuiq3wez****"
            }
        },
        "storage": [
            {
                "oss": {
                    "path": "oss://examplebucket/data-oss",
                    "readOnly": false
                },
                "properties": {
                    "resource_type": "model"
                },
                "mount_path": "/code/stable-diffusion-webui/data"
            },
            {
                "nfs": {
                    "path": "/",
                    "server": "726434****-aws0.ap-southeast-1.nas.aliyuncs.com"
                },
                "properties": {
                    "resource_type": "model"
                },
                "mount_path": "/code/stable-diffusion-webui/data"
            }
        ]
    } 

    The following table describes the parameters in the preceding code.

    Parameter

    Required

    Description

    metadata.name

    Yes

    The name of the custom model service. The name must be unique within a region.

    containers.image

    Yes

    Replace <region> with the ID of the current region. For example, replace the variable with cn-shanghai if your service is deployed in the China (Shanghai) region. For more information about region IDs, see Regions and zones.

    storage

    No

    Valid values:

    • oss: Compared with NAS, OSS is more convenient for data upload and download and can generate an Internet access address for generated images. However, the speed of switching models and saving images is slower. Set the storage.oss.path parameter to the path of the existing OSS bucket.

    • nas: NAS supports faster model switching and image saving. Set the storage.nfs.server parameter to the existing NAS file system.

    In this example, OSS is used.

    cloud.networking

    No

    If you set the storage parameter to nas, you must configure the VPC, including the IDs of the VPC, the vSwitch, and the security group. The VPC must be the same as the VPC of the general-purpose NAS file system.
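    As noted in the containers.image description above, the <region> placeholder must be replaced before deployment. The following Python sketch is purely illustrative; the template string is a shortened stand-in for the full configuration, not the complete JSON.

    ```python
    import json

    # Shortened stand-in for the deployment configuration shown above.
    template = '{"containers": [{"image": "eas-registry-vpc.<region>.cr.aliyuncs.com/pai-eas/stable-diffusion-webui:4.2"}]}'

    def fill_region(config_text: str, region: str) -> str:
        # Substitute the placeholder, then parse to verify the result is valid JSON.
        config = json.loads(config_text.replace("<region>", region))
        return json.dumps(config, indent=4)

    # For example, for a service deployed in the China (Shanghai) region:
    print(fill_region(template, "cn-shanghai"))
    ```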

    API Edition

    {
        "metadata": {
            "name": "sd_async",
            "instance": 1,
            "rpc.worker_threads": 1,
            "type": "Async"
        },
        "cloud": {
            "computing": {
                "instance_type": "ml.gu7i.c16m60.1-gu30",
                "instances": null
            },
            "networking": {
                "vpc_id": "vpc-bp1t2wukzskw9139n****",
                "vswitch_id": "vsw-bp12utkudylvp4c70****",
                "security_group_id": "sg-bp11nqxfd0iq6v5g****"
            }
        },
        "queue": {
            "cpu": 1,
            "max_delivery": 1,
            "memory": 4000,
            "resource": ""
        },
        "storage": [
            {
                "oss": {
                    "path": "oss://examplebucket/aohai-singapore/",
                    "readOnly": false
                },
                "properties": {
                    "resource_type": "model"
                },
                "mount_path": "/code/stable-diffusion-webui/data"
            },
            {
                "nfs": {
                    "path": "/",
                    "server": "0c9624****-fgh60.cn-hangzhou.nas.aliyuncs.com"
                },
                "properties": {
                    "resource_type": "model"
                },
                "mount_path": "/code/stable-diffusion-webui/data"
            }
        ],
        "containers": [
            {
                "image": "eas-registry-vpc.<region>.cr.aliyuncs.com/pai-eas/stable-diffusion-webui:4.2",
                "script": "./webui.sh --listen --port 8000 --skip-version-check --no-hashing --no-download-sd-model --skip-prepare-environment --api-log --time-log --nowebui --data-dir=/code/stable-diffusion-webui/data",
                "port": 8000
            }
        ]
    } 

    The following table describes only the parameters whose configurations differ from the Standard Edition service deployment.

    Parameter

    Description

    Delete the following parameters:

    metadata.enable_webservice

    Delete this parameter to disable the web server.

    containers.script

    Delete the --filebrowser option that is specified in the containers.script parameter to accelerate the service startup.

    Add the following parameters:

    metadata.type

    Set the value to Async to enable the asynchronous service.

    metadata.rpc.worker_threads

    Set the value to 1 to allow a single instance to process only one request at a time.

    queue.max_delivery

    Set the value to 1 to disable retry after an error occurs.

    containers.script

    Add the --nowebui and --time-log options in the containers.script parameter to accelerate the service startup and log the response time, respectively.

    For more information about the parameters, see the "Standard Edition" section of this topic.

  4. Click Deploy.

Call the model service

Call the service by using the web UI

You can use the web UI to call Standard Edition, Cluster Edition WebUI, and Serverless Edition services. Perform the following steps:

  1. Find the service that you want to manage and click View Web App in the Service Type column.

  2. Perform model inference.

    On the txt2img tab of the Stable Diffusion WebUI page, enter a prompt, such as cute dog, and then click Generate. The following figure provides a sample response.

    image

Call the service by using API operations

You can use API operations to call Standard Edition or API Edition services. The Standard Edition service supports only synchronous calls. The API Edition service supports both synchronous and asynchronous calls.

  • Synchronous call: If you send a synchronous request, the client pauses execution and waits for the result to return.

  • Asynchronous call: The client uses the queue service of EAS to send requests to the input queue and subscribes to the inference result in the output queue.

Synchronous call

After you deploy a Standard Edition or API Edition service, you can perform the following steps to send a synchronous request.

  1. Obtain the call information.

    1. After you deploy a service, click the service name to go to the Service Details page.

    2. In the Basic Information section, click View Endpoint Information.

    3. In the Invocation Method dialog box, obtain the endpoint and token of the service.

      • If you deploy an API Edition service, obtain the endpoint and token of the service on the Public Endpoint tab of the Synchronous Call tab.

      • If you deploy a Standard Edition service, obtain the endpoint and token of the service on the Public Endpoint tab.

  2. To send a synchronous request, use one of the following methods.

    Use a cURL command

    Sample file:

    curl --location --request POST '<service_url>/sdapi/v1/txt2img' \
    --header 'Authorization: <token>' \
    --header 'Content-Type: application/json' \
    --data-raw '{
      "prompt":"cute dog",
      "steps":20
    }'

    Modify the following parameters in the preceding code:

    • <service_url>: Replace the value with the endpoint that you obtained in Step 1. Delete the / at the end of the endpoint.

    • <token>: Set this parameter to the token that you obtained in Step 1.

    After you run the command, the system returns the Base64-encoded image.
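    The response is a JSON object whose images field contains the Base64-encoded results. The following sketch decodes and saves the first image; the raw string here is a placeholder payload for illustration, not real image data.

    ```python
    import base64
    import json

    # `raw` stands in for the JSON body returned by the cURL command above.
    raw = '{"images": ["aGVsbG8="]}'
    data = json.loads(raw)

    # Decode the first Base64-encoded image and write it to a local file.
    with open("output.png", "wb") as f:
        f.write(base64.b64decode(data["images"][0]))
    ```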

    Use Python code

    For information about how to send requests to the SD API, see the GitHub project wiki.

    • Example 1 (recommended): We recommend that you mount an OSS bucket to the EAS service to save the generated images. In the following example, the OSS mount path is used in the request body to save the image to OSS, and the oss2 SDK is used to download the image from OSS to your on-premises device.

      import requests
      import oss2
      from oss2.credentials import EnvironmentVariableCredentialsProvider
      
      # Step 1: Send a request. The generated image is saved to OSS. 
      url = "<service_url>"
      
      # The AccessKey pair of an Alibaba Cloud account has permissions on all API operations. Using these credentials to perform operations in OSS is a high-risk operation. We recommend that you use a RAM user to call API operations or perform routine O&M. To create a RAM user, log on to the RAM console. 
      auth = oss2.ProviderAuth(EnvironmentVariableCredentialsProvider())
      
      # In this example, the endpoint of the China (Hangzhou) region is used. Specify your actual endpoint. 
      bucket = oss2.Bucket(auth, '<endpoint>', '<examplebucket>')
      
      payload = {
          "alwayson_scripts": {
              "sd_model_checkpoint": "deliberate_v2.safetensors",
              "save_dir": "/code/stable-diffusion-webui/data/outputs"
          },
          "steps": 30,
          "prompt": "girls",
          "batch_size": 1,
          "n_iter": 2,
          "width": 576,
          "height": 576,
          "negative_prompt": "ugly, out of frame"
      }
      
      session = requests.session()
      session.headers.update({"Authorization": "<token>"})
      
      response = session.post(url=f'{url}/sdapi/v1/txt2img', json=payload)
      if response.status_code != 200:
          raise Exception(response.content)
      
      data = response.json()
      
      
      # Step 2: Obtain images from OSS and download the images to your on-premises device. 
      
      # The mount_path configuration for OSS that you specified when you deployed the service. 
      mount_path = "/code/stable-diffusion-webui/data"
      
      # The OSS path that you specified when you deployed the service. 
      oss_url = "oss://examplebucket/data-oss"
      
      for idx, img_path in enumerate(data['parameters']['image_url'].split(',')):
          # Obtain the actual URL of the generated image in OSS. 
          img_oss_path = img_path.replace(mount_path, oss_url)
          print(idx, img_oss_path, url)
      
          # Download the OSS object to an on-premises file system. Replace <examplebucket> with the name of the OSS bucket that you created. 
          bucket.get_object_to_file(img_oss_path[len("oss://examplebucket/"):], f'output-{idx}.png')

      The following table describes the key parameters.

      Parameter

      Description

      url

      Replace <service_url> with the endpoint that you obtained in Step 1. Delete / at the end of the endpoint.

      bucket

      Take note of the following parameters:

      • Replace <endpoint> with the endpoint that is used to access OSS. For example, the http://oss-cn-shanghai.aliyuncs.com endpoint is used for the China (Shanghai) region. If your service is deployed in another region, specify your actual endpoint. For more information, see Regions and endpoints.

      • Replace <examplebucket> with the name of the OSS bucket that you created.

      <token>

      Set this parameter to the token that you obtained in Step 1.

      mount_path

      The OSS mount path that you configured when you deployed the service.

      oss_url

      The OSS storage path that you configured when you deployed the service.

      If the code runs successfully, the following results are returned. You can go to the OSS console and view the generated images in the outputs directory of the path that you mounted when you deployed the service.

      0 /code/stable-diffusion-webui/data/outputs/txt2img-grids/2024-06-26/grid-093546-9ad3f23e-a5c8-499e-8f0b-6effa75bd04f.png oss://examplebucket/data-oss/outputs/txt2img-grids/2024-06-26/grid-093546-9ad3f23e-a5c8-499e-8f0b-6effa75bd04f.png
      1 /code/stable-diffusion-webui/data/outputs/txt2img-images/2024-06-26/093536-ab4c6ab8-880d-4de6-91d5-343f8d97ea3c-3257304074.png oss://examplebucket/data-oss/outputs/txt2img-images/2024-06-26/093536-ab4c6ab8-880d-4de6-91d5-343f8d97ea3c-3257304074.png
      2 /code/stable-diffusion-webui/data/outputs/txt2img-images/2024-06-26/093545-6e6370d7-d41e-4105-960a-b4739af30c0d-3257304075.png oss://examplebucket/data-oss/outputs/txt2img-images/2024-06-26/093545-6e6370d7-d41e-4105-960a-b4739af30c0d-3257304075.png
    • Example 2: Save an image to an on-premises path. Run the following Python code to obtain the Base64-encoded image and save the image file to an on-premises directory.

      import requests
      import io
      import base64
      from PIL import Image, PngImagePlugin
      
      url = "<service_url>"
      
      payload = {
          "prompt": "puppy dog",
          "steps": 20,
          "n_iter": 2
      }
      
      session = requests.session()
      session.headers.update({"Authorization": "<token>"})
      
      
      response = session.post(url=f'{url}/sdapi/v1/txt2img', json=payload)
      if response.status_code != 200:
          raise Exception(response.content)
      
      data = response.json()
      
      # Obtain a Base64-encoded image file. We recommend that you obtain an image URL instead of the Base64-encoded image file, as shown in Example 1. 
      for idx, im in enumerate(data['images']):
          image = Image.open(io.BytesIO(base64.b64decode(im.split(",", 1)[0])))
      
          png_payload = {
              "image": "data:image/png;base64," + im
          }
          resp = session.post(url=f'{url}/sdapi/v1/png-info', json=png_payload)
      
          pnginfo = PngImagePlugin.PngInfo()
          pnginfo.add_text("parameters", resp.json().get("info"))
          image.save(f'output-{idx}.png', pnginfo=pnginfo)
      

      Take note of the following parameters:

      • <service_url>: Replace the value with the endpoint that you obtained in Step 1. Delete the / at the end of the endpoint.

      • <token>: Set this parameter to the token that you obtained in Step 1.

    You can also use LoRA and ControlNet data in the request to enable a specific functionality.

    LoRA configurations

    Add <lora:yaeMikoRealistic_Genshin:1> in the prompt parameter to use LoRA models. For more information, see LORA and alwayson_scripts example.

    Sample request body:

    {
      "prompt":"girls <lora:yaeMikoRealistic_Genshin:1>",
      "steps":20,
      "save_images":true
    }
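    The request body above can be built in Python as follows. The helper function is hypothetical; the model name and weight follow the sample request body in this topic.

    ```python
    # Hypothetical helper that embeds a LoRA reference in the prompt field.
    def with_lora(prompt: str, lora_name: str, weight: int = 1) -> str:
        return f"{prompt} <lora:{lora_name}:{weight}>"

    # Mirrors the sample request body above.
    payload = {
        "prompt": with_lora("girls", "yaeMikoRealistic_Genshin", 1),
        "steps": 20,
        "save_images": True,
    }
    ```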

    ControlNet configurations

    You can add the controlnet parameter in the request to perform common operations on the generated image, such as keeping the image horizontal or vertical. For more information, see the "Example: txt2img with ControlNet" section of this topic.

Call an asynchronous service

After you deploy an API Edition service, you can send an asynchronous request to the service. The client subscribes to the inference result in the output queue. Perform the following steps:

  1. Obtain the invocation information.

    Click Invocation Method in the Service Type column of the service. On the Public Endpoint tab of the Asynchronous Call tab, view the service endpoint and token.

  2. Send asynchronous requests. You can use the SDK for Python or SDK for Java.

    Important
    • The queue service requires that each message in the input or output queue not exceed 8 KB in size. Take note of the following items:

      • If the request data contains an image, we recommend that you use a URL to pass the image data. SD WebUI automatically downloads and parses the image data.

      • To ensure that the response does not contain original image data, we recommend that you use the save_dir parameter to specify the path where the generated image is saved. For more information, see the "What parameters can I configure for API operations?" section of this topic.

    • By default, EAS cannot access the Internet. If you set the image_link parameter to an Internet URL, you must complete the required configurations so that EAS can access the image. For more information, see Configure network connectivity and Configure Internet access and a whitelist.
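    Given the 8 KB limit above, you can check the serialized size of a request before you send it. The following is a minimal sketch; the payload mirrors the asynchronous example in this topic, and the image URL is a placeholder.

    ```python
    import json

    # The queue service limit noted above; larger messages are rejected.
    MAX_QUEUE_MESSAGE_BYTES = 8 * 1024

    # Images should be passed by URL rather than inline to stay under the limit.
    payload = {
        "prompt": "cute dog",
        "steps": 20,
        "alwayson_scripts": {
            "save_dir": "/code/stable-diffusion-webui/data-oss/outputs/txt2img",
            "image_link": "https://example.com/boy.png",  # placeholder URL
        },
    }

    body = json.dumps(payload).encode("utf-8")
    assert len(body) <= MAX_QUEUE_MESSAGE_BYTES, "request too large for the queue service"
    ```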

    Method 1: Use SDK for Python

    Sample code:

    import requests
    
    url = "<service_url>"
    session = requests.session()
    session.headers.update({"Authorization": "<token>"})
    
    prompts = ["cute dog", "cute cat", "cute girl"]
    
    for i in range(5):
        p = prompts[i % len(prompts)]
        payload = {
            "prompt": p,
            "steps": 20,
            "alwayson_scripts": {
                "save_dir": "/code/stable-diffusion-webui/data-oss/outputs/txt2img"
            },
        }
        response = session.post(url=f'{url}/sdapi/v1/txt2img?task_id=txt2img_{i}', json=payload)
        if response.status_code != 200:
            exit(f"send request error:{response.content}")
        else:
            print(f"send {p} success, index is {response.content}")
    
    
    for i in range(5):
        p = prompts[i % len(prompts)]
        payload = {
            "prompt": p,
            "steps": 20,
            "alwayson_scripts": {
                "save_dir": "/code/stable-diffusion-webui/data-oss/outputs/img2img",
                "image_link": "https://eas-cache-cn-hangzhou.oss-cn-hangzhou-internal.aliyuncs.com/stable-diffusion-cache/tests/boy.png",
            },
        }
        response = session.post(url=f'{url}/sdapi/v1/img2img?task_id=img2img_{i}', json=payload)
        if response.status_code != 200:
            exit(f"send request error:{response.content}")
        else:
            print(f"send {p} success, index is {response.content}")
    

    Take note of the following parameters:

    • <service_url>: Replace the value with the endpoint that you obtained in Step 1. Delete / at the end of the endpoint.

    • <token>: Replace the value with the token that you obtained in Step 1.

    Note
    • You can use SDK for Python to send POST requests to the API endpoints provided by SD WebUI. Select an endpoint based on your business requirements.

    • If you want to pass custom information to the service, specify a custom tag by using a URL parameter. For example, you can append ?task_id=task_abc to the request path to specify a tag named task_id. The tag information is included in the result specified by the tags parameter.
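    For example, a custom tag can be appended to the request path as a URL parameter. In the following minimal sketch, <service_url> stands for the endpoint that you obtained in Step 1.

    ```python
    from urllib.parse import urlencode

    # <service_url> stands for the endpoint obtained in Step 1.
    service_url = "<service_url>"

    # The task_id tag is returned in the tags field of the inference result.
    request_url = f"{service_url}/sdapi/v1/txt2img?{urlencode({'task_id': 'task_abc'})}"
    ```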

    If the code runs successfully, a success message that contains the request index is printed for each request. Your actual result may vary.

    Method 2: Use SDK for Java

    Maven is used to manage Java projects. You must add EAS SDK for Java as a dependency in the pom.xml file. For more information, see SDK for Java.

    The following sample code provides an example on how to send asynchronous requests.

    import com.aliyun.openservices.eas.predict.http.HttpConfig;
    import com.aliyun.openservices.eas.predict.http.QueueClient;
    import com.aliyun.openservices.eas.predict.queue_client.QueueUser;
    import org.apache.commons.lang3.tuple.Pair;
    
    import java.util.HashMap;
    
    public class SDWebuiAsyncPutTest {
        public static void main(String[] args) throws Exception {
            // Create a client for the queue service. 
            String queueEndpoint = "http://166233998075****.cn-hangzhou.pai-eas.aliyuncs.com";
            String queueToken = "xxxxx==";
            // The name of the input queue consists of the service name and the request path that you want to use. 
            String inputQueueName = "<service_name>/sdapi/v1/txt2img";
    
            // Create the input queue. After you add request data to the input queue, the inference service automatically reads the request data from the input queue. 
            QueueClient inputQueue =
                new QueueClient(queueEndpoint, inputQueueName, queueToken, new HttpConfig(), new QueueUser());
            // Clear queue data. Proceed with caution. 
            // inputQueue.clear();
    
            // Add request data to the input queue. 
            int count = 5;
            for (int i = 0; i < count; ++i) {
                // Create request data. 
                String data = "{\n" +
                    "    \"prompt\": \"cute dog\", \n" +
                    "    \"steps\":20,\n" +
                    "    \"alwayson_scripts\":{\n" +
                    "        \"save_dir\":\"/code/stable-diffusion-webui/data-oss/outputs/txt2img\"\n" +
                    "    }\n" +
                    "  }";
                // Create a custom tag. 
                HashMap<String, String> map = new HashMap<String, String>(1);
                map.put("task_id", "txt2img_" + i);
                Pair<Long, String> entry = inputQueue.put(data.getBytes(), map);
    
                System.out.println(String.format("send success, index is %d, request_id is %s", entry.getKey(), entry.getValue()));
                // The queue service supports multi-priority queues. You can use the put function to set the priority level of the request. The default value is 0. A value of 1 specifies a high priority. 
                //  inputQueue.put(data.getBytes(), 0L, null);
            }
            // Close the client. 
            inputQueue.shutdown();
    
    
            inputQueueName = "<service_name>/sdapi/v1/img2img";
            inputQueue =
                new QueueClient(queueEndpoint, inputQueueName, queueToken, new HttpConfig(), new QueueUser());
            for (int i = 0; i < count; ++i) {
                // Create request data. 
                String data = "{\n" +
                    "    \"prompt\": \"cute dog\", \n" +
                    "    \"steps\":20,\n" +
                    "    \"alwayson_scripts\":{\n" +
                    "        \"save_dir\":\"/code/stable-diffusion-webui/data-oss/outputs/img2img\",\n" +
                    "        \"image_link\":\"https://eas-cache-cn-hangzhou.oss-cn-hangzhou-internal.aliyuncs.com/stable-diffusion-cache/tests/boy.png\"\n" +
                    "    }\n" +
                    "  }";
                HashMap<String, String> map = new HashMap<String, String>(1);
                map.put("task_id", "img2img_" + i);
                Pair<Long, String> entry = inputQueue.put(data.getBytes(), map);
    
                System.out.println(String.format("send success, index is %d, requestId is %s", entry.getKey(), entry.getValue()));
            }
    
            // Close the client. 
            inputQueue.shutdown();
        }
    }

    Take note of the following parameters:

    • queueEndpoint: Set the value to the endpoint that you obtained in Step 1. Refer to the sample code to configure this parameter.

    • queueToken: Set the value to the token that you obtained in Step 1.

    • <service_name>: Set the value to the name of the asynchronous service that you deployed.

    Note

    If you want to pass custom information to the service, specify a custom tag in the put function. You can refer to the sample code to configure custom tags. The tag information is included in the result within the tags parameter.

    If the code successfully runs, the following result is returned. Your actual result may vary.

    send success, index is 21, request_id is 05ca7786-c24e-4645-8538-83d235e791fe
    send success, index is 22, request_id is 639b257a-7902-448d-afd5-f2641ab77025
    send success, index is 23, request_id is d6b2e127-eba3-4414-8e6c-c3690e0a487c
    send success, index is 24, request_id is 8becf191-962d-4177-8a11-7e4a450e36a7
    send success, index is 25, request_id is 862b2d8e-5499-4476-b3a5-943d18614fc5
    send success, index is 26, requestId is 9774a4ff-f4c8-40b7-ba43-0b1c1d3241b0
    send success, index is 27, requestId is fa536d7a-7799-43f1-947f-71973bf7b221
    send success, index is 28, requestId is e69bdd32-5c7b-4c8f-ba3e-e69d2054bf65
    send success, index is 29, requestId is c138bd8f-be45-4a47-a330-745fd1569534
    send success, index is 30, requestId is c583d4f8-8558-4c8d-95f7-9c3981494007
    
    Process finished with exit code 0

  3. Subscribe to the results of the asynchronous requests.

    After the server processes the requests, it automatically pushes the results to the client, which enables efficient asynchronous communication. You can use SDK for Python or SDK for Java to subscribe to the results.

    Method 1: Use SDK for Python

    Sample code:

    import json
    import oss2
    
    from oss2.credentials import EnvironmentVariableCredentialsProvider
    from eas_prediction import QueueClient
    
    sink_queue = QueueClient('139699392458****.cn-hangzhou.pai-eas.aliyuncs.com', 'sd_async/sink')
    sink_queue.set_token('<token>')
    sink_queue.init()
    
    mount_path = "/code/stable-diffusion-webui/data-oss"
    oss_url = "oss://<examplebucket>/aohai-singapore"
    # The AccessKey pair of an Alibaba Cloud account has permissions on all API operations. Using these credentials to perform operations in OSS is a high-risk operation. We recommend that you use a RAM user to call API operations or perform routine O&M. To create a RAM user, log on to the RAM console. 
    auth = oss2.ProviderAuth(EnvironmentVariableCredentialsProvider())
    # In this example, the endpoint of the China (Hangzhou) region is used. Specify your actual endpoint. 
    bucket = oss2.Bucket(auth, 'http://oss-cn-hangzhou.aliyuncs.com', '<examplebucket>')
    
    
    watcher = sink_queue.watch(0, 5, auto_commit=False)
    for x in watcher.run():
        if 'task_id' in x.tags:
            print('index {} task_id is {}'.format(x.index, x.tags['task_id']))
        print(f'index {x.index} data is {x.data}')
        sink_queue.commit(x.index)
        try:
            data = json.loads(x.data.decode('utf-8'))
            for idx, path in enumerate(data['parameters']['image_url'].split(',')):
                url = path.replace(mount_path, oss_url)
                # Download the OSS object to an on-premises file system. 
                bucket.get_object_to_file(url[len("oss://<examplebucket>/"):], f'{x.index}-output-{idx}.png')
                print(f'save {url} to {x.index}-output-{idx}.png')
        except Exception as e:
            print(f'index {x.index} process data error {e}')
    

    Take note of the following parameters:

    • sink_queue: Configure the following items:

      • Replace 139699392458****.cn-hangzhou.pai-eas.aliyuncs.com with the endpoint that you obtained in the preceding step, in the format shown in the sample code.

      • Replace sd_async with the name of the asynchronous service that you deployed, in the format shown in the sample code.

    • <token>: Set the value to the service token that you obtained in Step 1.

    • oss_url: Set the value to the OSS path that you specified when you deployed the service.

    • bucket: Configure the following items:

      • Use the endpoint that is used to access OSS. In this example, http://oss-cn-hangzhou.aliyuncs.com is used for the China (Hangzhou) region. If your service is deployed in another region, configure the parameter based on the actual situation. For more information, see Regions and endpoints.

      • Replace <examplebucket> with the name of the OSS bucket that you created.

    Note
    • You can manually commit the data or set the auto_commit parameter to True to automatically commit the data.

    • If the client of the queue service stops consuming data, we recommend that you close the client to release resources.

    • You can also run a cURL command or call an API operation to subscribe to the results. For more information, see Asynchronous inference and queue service.

    The client continuously listens for results from the server by using the watcher.run() method. If the server returns no result, the client keeps waiting. If the server returns a result, the client prints the result. If the code successfully runs, the following result is returned. Your actual results may vary. You can go to the OSS console and view the generated images in the OSS path that you specified when you deployed the service.

    index 1 task_id is txt2img_0
    index 1 data is b'{"images":[],"parameters":{"id_task":null,"status":0,"image_url":"/code/stable-diffusion-webui/data/outputs/txt2img/txt2img-images/2024-07-01/075825-a2abd45f-3c33-43f2-96fb-****50329671-1214613912.png","seed":"1214613912","error_msg":"","image_mask_url":""},"info":"{\\"hostname\\": \\"***-8aff4771-5c86c8d656-hvdb8\\"}"}'
    save oss://examplebucket/xx/outputs/txt2img/txt2img-images/2024-07-01/075825-a2abd45f-3c33-43f2-96fb-****50329671-1214613912.png to 1-output-0.png
    index 2 task_id is txt2img_1
    index 2 data is b'{"images":[],"parameters":{"id_task":null,"status":0,"image_url":"/code/stable-diffusion-webui/data/outputs/txt2img/txt2img-images/2024-07-01/075827-c61af78c-25f2-47cc-9811-****aa51f5e4-1934284737.png","seed":"1934284737","error_msg":"","image_mask_url":""},"info":"{\\"hostname\\": \\"***-8aff4771-5c86c8d656-hvdb8\\"}"}'
    save oss://examplebucket/xx/outputs/txt2img/txt2img-images/2024-07-01/075827-c61af78c-25f2-47cc-9811-****aa51f5e4-1934284737.png to 2-output-0.png
    index 3 task_id is txt2img_2
    index 3 data is b'{"images":[],"parameters":{"id_task":null,"status":0,"image_url":"/code/stable-diffusion-webui/data/outputs/txt2img/txt2img-images/2024-07-01/075829-1add1f5c-5c61-4f43-9c2e-****9d987dfa-3332597009.png","seed":"3332597009","error_msg":"","image_mask_url":""},"info":"{\\"hostname\\": \\"***-8aff4771-5c86c8d656-hvdb8\\"}"}'
    save oss://examplebucket/xx/outputs/txt2img/txt2img-images/2024-07-01/075829-1add1f5c-5c61-4f43-9c2e-****9d987dfa-3332597009.png to 3-output-0.png
    index 4 task_id is txt2img_3
    index 4 data is b'{"images":[],"parameters":{"id_task":null,"status":0,"image_url":"/code/stable-diffusion-webui/data/outputs/txt2img/txt2img-images/2024-07-01/075832-2674c2d0-8a93-4cb5-9ff4-****46cec1aa-1250290207.png","seed":"1250290207","error_msg":"","image_mask_url":""},"info":"{\\"hostname\\": \\"***-8aff4771-5c86c8d656-hvdb8\\"}"}'
    save oss://examplebucket/xx/outputs/txt2img/txt2img-images/2024-07-01/075832-2674c2d0-8a93-4cb5-9ff4-****46cec1aa-1250290207.png to 4-output-0.png
    index 5 task_id is txt2img_4
    index 5 data is b'{"images":[],"parameters":{"id_task":null,"status":0,"image_url":"/code/stable-diffusion-webui/data/outputs/txt2img/txt2img-images/2024-07-01/075834-8bb15707-ff0d-4dd7-b2da-****27717028-1181680579.png","seed":"1181680579","error_msg":"","image_mask_url":""},"info":"{\\"hostname\\": \\"***-8aff4771-5c86c8d656-hvdb8\\"}"}'
    save oss://examplebucket/xx/outputs/txt2img/txt2img-images/2024-07-01/075834-8bb15707-ff0d-4dd7-b2da-****27717028-1181680579.png to 5-output-0.png

    Method 2: Use SDK for Java

    Sample code:

    import com.aliyun.openservices.eas.predict.http.HttpConfig;
    import com.aliyun.openservices.eas.predict.http.QueueClient;
    import com.aliyun.openservices.eas.predict.queue_client.DataFrame;
    import com.aliyun.openservices.eas.predict.queue_client.QueueUser;
    import com.aliyun.openservices.eas.predict.queue_client.WebSocketWatcher;
    
    public class SDWebuiAsyncWatchTest {
        public static void main(String[] args) throws Exception {
            // Create a client for the queue service. 
            String queueEndpoint = "http://166233998075****.cn-hangzhou.pai-eas.aliyuncs.com";
            String queueToken = "xxxxx==";
            // The name of the output queue consists of the service name and "/sink". 
            String sinkQueueName = "<service_name>/sink";
    
            // The output queue. The inference service processes the input data and writes the results to the output queue. 
            QueueClient sinkQueue =
                new QueueClient(queueEndpoint, sinkQueueName, queueToken, new HttpConfig(), new QueueUser());
    
            // Clear queue data. Proceed with caution. 
            //  sinkQueue.clear();
    
            // Subscribe to the queue and obtain the results. 
            WebSocketWatcher watcher = sinkQueue.watch(0L, 5L, false, false, null);
            try {
                while (true) {
                    DataFrame df = watcher.getDataFrame();
                    if (df.getTags().containsKey("task_id")) {
                        System.out.println(String.format("task_id = %s", df.getTags().get("task_id")));
                    }
                    System.out.println(String.format("index = %d, data = %s, requestId = %s", df.getIndex(), new String(df.getData()), df.getTags().get("requestId")));
                    sinkQueue.commit(df.getIndex());
                }
            } catch (Exception e) {
                System.out.println("watch error:" + e.getMessage());
                e.printStackTrace();
                watcher.close();
            }
    
            // Close the client. 
            sinkQueue.shutdown();
        }
    }

    Take note of the following parameters:

    • queueEndpoint: Set the value to the endpoint that you obtained in Step 1. You can refer to the sample code to configure this parameter.

    • queueToken: Set the value to the token that you obtained in Step 1.

    • <service_name>: Set the value to the name of the asynchronous service that you deployed.

    Note
    • You can manually commit the data or set the auto_commit parameter to true to automatically commit the data.

    • If the client of the queue service stops consuming data, we recommend that you close the client to release resources.

    • You can also run a cURL command or call an API operation to subscribe to the results. For more information, see Asynchronous inference and queue service.

    The client continuously listens for results from the server by using the watcher.getDataFrame() method. If the server returns no result, the client keeps waiting. If the server returns a result, the client prints the result. If the code successfully runs, the following result is returned. Your actual results may vary. You can go to the OSS console and view the generated images in the OSS path that you specified when you deployed the service.

    2023-08-04 16:17:31,497 INFO [com.aliyun.openservices.eas.predict.queue_client.WebSocketWatcher] - WebSocketClient Successfully Connects to Server: 1396993924585947.cn-hangzhou.pai-eas.aliyuncs.com/116.62.XX.XX:80
    task_id = txt2img_0
    index = 21, data = {"images":[],"parameters":{"id_task":null,"status":0,"image_url":"/code/stable-diffusion-webui/data-oss/outputs/txt2img/txt2img-images/2023-08-04/54363a9d-24a5-41b5-b038-2257d43b8e79-412510031.png","seed":"412510031","error_msg":"","total_time":2.5351321697235107},"info":""}, requestId = 05ca7786-c24e-4645-8538-83d235e791fe
    task_id = txt2img_1
    index = 22, data = {"images":[],"parameters":{"id_task":null,"status":0,"image_url":"/code/stable-diffusion-webui/data-oss/outputs/txt2img/txt2img-images/2023-08-04/0c646dda-4a53-43f4-97fd-1f507599f6ae-2287341785.png","seed":"2287341785","error_msg":"","total_time":2.6269655227661133},"info":""}, requestId = 639b257a-7902-448d-afd5-f2641ab77025
    task_id = txt2img_2
    index = 23, data = {"images":[],"parameters":{"id_task":null,"status":0,"image_url":"/code/stable-diffusion-webui/data-oss/outputs/txt2img/txt2img-images/2023-08-04/4d542f25-b9cc-4548-9db2-5addd0366d32-1158414078.png","seed":"1158414078","error_msg":"","total_time":2.6604185104370117},"info":""}, requestId = d6b2e127-eba3-4414-8e6c-c3690e0a487c
    task_id = txt2img_3

    After you send an asynchronous request and subscribe to the results, you can call the search() method to query the request status. For more information, see SDK for Python and SDK for Java.

EAS provides additional parameters on top of the SD WebUI API. You can configure these parameters to use advanced features and meet custom business requirements. For more information, see the "What parameters can I configure for API operations?" section of this topic.

Install extensions for enhanced features

You can configure extensions for Stable Diffusion WebUI to enable various features. PAI provides various preset extensions, such as the BeautifulPrompt extension, which is used to expand and polish a prompt. The following section uses the BeautifulPrompt extension as an example to describe how to install and use the extension.

Install the extension

You can view and install the extension on the Extensions tab of the web UI page. Perform the following steps:

  1. Find the service that you want to manage and click View Web App in the Service Type column.

  2. On the Extensions tab of the web UI page, check whether BeautifulPrompt is selected. If the extension is not selected, select the check box in front of BeautifulPrompt and click Apply and restart UI to load the BeautifulPrompt extension.

    When you install the extension, the web UI automatically restarts. After the extension is reloaded, you can perform model inference to check the effect.

Use the extension for inference

  1. On the BeautifulPrompt tab, enter a simple prompt in the field and click Generate to generate a detailed prompt.

    PAI provides multiple prompt generation models, and each model generates a different style of prompt:

    • pai-bloom-1b1-text2prompt-sd-v2: suitable for generating prompts in complex scenarios.

    • pai-bloom-1b1-text2prompt-sd: suitable for generating prompts that describe a single object.

    You can select a model based on your business requirements.

  2. Select the prompt that you want to use and click the to txt2img button next to the generated prompt.

    The txt2img tab appears and the prompt that you selected is automatically specified.

  3. Click Generate to generate an image on the right side of the web UI page.

    BeautifulPrompt helps you improve the image quality and add more details to the image. The comparison images for sample prompts, such as "a cat" and "a giant tiger", show that images generated with BeautifulPrompt contain noticeably more detail than images generated without it.

FAQ

How do I use my model and output directory?

After you deploy the service, the system automatically creates a directory structure in the mounted OSS bucket or NAS file system.

Take note of the following directories:

  • models: This directory stores model files.

  • outputs: After you send an inference request, the system automatically writes the generated result files to this directory based on the preset configurations in the API code.

To load and use a new model, store a LoRA or Stable Diffusion model that you downloaded from the open source community or trained yourself in the directory described above. Perform the following steps:

  1. Upload the model file to the subdirectory of the models directory. For more information, see the "Upload an object" section in the Get started by using the OSS console topic.

  2. On the Elastic Algorithm Service (EAS) page, find the service that you want to manage and choose Restart Service in the Actions column. After the service restarts, the configuration takes effect.

  3. On the Stable Diffusion web UI page, switch the model and perform model inference.

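As a sketch, the mapping from a model type to its upload subdirectory can be expressed as follows. The directory names follow common Stable Diffusion WebUI conventions and the helper is illustrative; verify the actual layout in your mounted storage:

```python
# Hypothetical helper: map a model type to its subdirectory under the
# mounted data path (names follow Stable Diffusion WebUI conventions).
MODEL_SUBDIRS = {
    "checkpoint": "models/Stable-diffusion",
    "lora": "models/Lora",
    "vae": "models/VAE",
    "embedding": "embeddings",
}

def target_path(data_dir: str, model_type: str, filename: str) -> str:
    """Return the full upload path for a model file."""
    return f"{data_dir}/{MODEL_SUBDIRS[model_type]}/{filename}"

print(target_path("/code/stable-diffusion-webui/data-oss", "lora", "my_style.safetensors"))
# → /code/stable-diffusion-webui/data-oss/models/Lora/my_style.safetensors
```

After you upload the file, restart the service so that the new model is loaded.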

What do I do if the service freezes for a long period of time?

  • You can reopen the Stable Diffusion web application page or restart the EAS service. Perform the following steps:

    • Find the service that you want to manage and click View Web App in the Service Type column to reopen the web application page.

    • Choose Restart Service in the Actions column of the service to restart the EAS service.

  • If the issue persists, the service may need to download models or plug-ins from the Internet. By default, EAS cannot access the Internet. You can load an image or mount a model offline, but specific plug-ins still require an Internet connection to download. In this case, we recommend that you find the download path of the model or plug-in in the service logs, download the file, and then upload and mount it to OSS. For more information, see How do I use my model and output directory?. If you require Internet access, you can configure network settings. For more information, see Configure network connectivity and Configure Internet access and a whitelist.

How do I change the default language of the web application interface to English?

  1. On the web UI page, click 设置 (Settings).

  2. In the left-side navigation pane, click 用户界面 (User interface). In the lower part of the 本地化 (Localization) section, select .

  3. In the upper part of the Stable Diffusion WebUI page, click 保存设置 (Apply settings), and then click 重载前端 (Reload UI).

    Refresh the web application interface. The page is displayed in English.

How do I manage my file system?

When you deploy a service of Standard Edition, API Edition, or Cluster Edition WebUI, the system automatically adds the --filebrowser parameter to the command. You can directly manage your file system by using the web UI. Perform the following steps:

  1. After the service is deployed, click View Web App in the Service Type column.

  2. On the web UI page, click the FileBrowser tab. You can view the file system, upload on-premises files to the file system, or download files to your on-premises computer.


What parameters can I configure when I create a service?

  • Common parameters

    • --blade: Enables PAI-Blade to accelerate image generation. We recommend that you enable this feature.

    • --filebrowser: Allows you to upload and download models or images. By default, this feature is enabled.

    • --data-dir /code/stable-diffusion-webui/data-oss: The path used to mount the persistent storage. The default path is /code/stable-diffusion-webui/. You can also use a relative path.

    • --api: The API calling mode of the web UI. By default, this feature is enabled.

    • --enable-nsfw-censor: Enables the content moderation feature. By default, this feature is disabled. Enable it if you require security compliance.

    • --always-hide-tabs: Hides specific tabs. Enable the feature based on your business requirements.

    • --min-ram-reserved 40 --sd-dynamic-cache: Caches the Stable Diffusion model in memory.
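    As an illustration only, the common flags above could be assembled into a launch command like the following. The launch.py entry point and the flag combination are assumptions for the sketch; EAS composes the actual command for you:

    ```python
    # Assemble the WebUI launch flags from the table above (illustrative only).
    flags = [
        "--blade",        # PAI-Blade acceleration
        "--filebrowser",  # file upload/download tab
        "--api",          # enable API calling mode
        "--data-dir", "/code/stable-diffusion-webui/data-oss",  # persistent storage mount
        "--min-ram-reserved", "40", "--sd-dynamic-cache",       # cache the SD model in memory
    ]
    command = " ".join(["python", "launch.py", *flags])
    print(command)
    ```
    
    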

  • Cluster Edition parameters

    Note

    The ckpt and ControlNet models automatically load both the files in the public directory and the user-specific files.

    • --lora-dir: Specifies the public LoRA model directory. Example: --lora-dir /code/stable-diffusion-webui/data-oss/models/Lora. By default, this parameter is not configured: LoRA directories are isolated per user, and each user loads only the LoRA models in their own folder. If you specify a directory, all users load the LoRA models in the public directory in addition to the LoRA models in their own folders.

    • --vae-dir: Specifies the public Variational Autoencoder (VAE) model directory. Example: --vae-dir /code/stable-diffusion-webui/data-oss/models/VAE. By default, this parameter is not configured: VAE directories are isolated per user, and each user loads only the VAE models in their own folder. If you specify a directory, all users load the VAE models in the public directory.

    • --gfpgan-dir: Specifies the public GFPGAN model directory. Example: --gfpgan-dir /code/stable-diffusion-webui/data-oss/models/GFPGAN. By default, this parameter is not configured: GFPGAN directories are isolated per user, and each user loads only the GFPGAN models in their own folder. If you specify a directory, all users load the GFPGAN models in the public directory.

    • --embeddings-dir: Specifies the public embeddings directory. Example: --embeddings-dir /code/stable-diffusion-webui/data-oss/embeddings. By default, this parameter is not configured: embeddings directories are isolated per user, and each user loads only the embedding models in their own folder. If you specify a directory, all users load the embedding models in the public directory.

    • --hypernetwork-dir: Specifies the public hypernetwork model directory. Example: --hypernetwork-dir /code/stable-diffusion-webui/data-oss/models/hypernetworks. By default, this parameter is not configured: hypernetwork directories are isolated per user, and each user loads only the hypernetwork models in their own folder. If you specify a directory, all users load the hypernetwork models in the public directory.

    • --root-extensions: Uses the extension directory as a public directory so that all users see the same extensions. Use this parameter if you want to install or manage extensions in a centralized manner.
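    The per-user isolation behavior described above can be sketched as follows. The per-user directory layout in the example is hypothetical; the sketch only illustrates how a public directory is added on top of each user's isolated folder:

    ```python
    # Hypothetical sketch of Cluster Edition model-directory resolution:
    # without a public directory, a user loads only their own folder; with
    # one (for example via --lora-dir), the public directory is also loaded.
    def resolved_dirs(user_dir, public_dir=None):
        dirs = [user_dir]
        if public_dir:
            dirs.append(public_dir)
        return dirs

    # Per-user path below is invented for illustration.
    print(resolved_dirs("/data-oss/users/alice/models/Lora",
                        "/code/stable-diffusion-webui/data-oss/models/Lora"))
    ```
    
    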

What parameters can I configure for an API operation?

EAS provides additional parameters on top of the SD WebUI API. You can configure these parameters to use advanced features and meet custom business requirements.

  • Configure the SD model, the VAE model, and the path to save the generated images.

  • Use URL parameters to send requests, for which status codes are returned.

  • Access the generated images by using URLs, including the images that are processed by ControlNet.

Sample code:

Example: txt2img

Sample request body:

{
      "alwayson_scripts": {
          "sd_model_checkpoint": "deliberate_v2.safetensors",  
          "save_dir": "/code/stable-diffusion-webui/data-oss/outputs",
          "sd_vae": "Automatic"
      },
      "steps": 20,
      "prompt": "girls",          
      "batch_size": 1,                                            
      "n_iter": 2,                                                 
      "width": 576, 
      "height": 576,
      "negative_prompt": "ugly, out of frame"
  }

Parameters:

  • sd_model_checkpoint: the SD model that you want to use. The service automatically switches to the specified model as the base model.

  • sd_vae: the VAE model that you want to use.

  • save_dir: the path in which to save the generated images.
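As a minimal sketch, the request body above can be built programmatically. The helper name is illustrative; the field names and default values come from the sample:

```python
def txt2img_payload(prompt, save_dir,
                    checkpoint="deliberate_v2.safetensors", vae="Automatic",
                    steps=20, batch_size=1, n_iter=2,
                    width=576, height=576, negative_prompt=""):
    """Build a txt2img request body with the EAS-specific fields."""
    return {
        "alwayson_scripts": {
            "sd_model_checkpoint": checkpoint,  # SD model to use
            "save_dir": save_dir,               # where generated images are saved
            "sd_vae": vae,                      # VAE model to use
        },
        "steps": steps,
        "prompt": prompt,
        "batch_size": batch_size,
        "n_iter": n_iter,
        "width": width,
        "height": height,
        "negative_prompt": negative_prompt,
    }

payload = txt2img_payload("girls", "/code/stable-diffusion-webui/data-oss/outputs",
                          negative_prompt="ugly, out of frame")
```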

Sample synchronous request:


curl --location --request POST '<service_url>/sdapi/v1/txt2img' \
--header 'Authorization: <token>' \
--header 'Content-Type: application/json' \
--data-raw '{
      "alwayson_scripts": {
          "sd_model_checkpoint": "deliberate_v2.safetensors",
          "save_dir": "/code/stable-diffusion-webui/data-oss/outputs",
          "sd_vae": "Automatic"
      },
      "prompt": "girls",          
      "batch_size": 1,                                            
      "n_iter": 2,                                                 
      "width": 576, 
      "height": 576,
      "negative_prompt": "ugly, out of frame"
  }'

Sample response to a synchronous request:

{
  "images": [],
  "parameters": {
    "id_task": "14837",
    "status": 0,
    "image_url": "/code/stable-diffusion-webui/data-oss/outputs/txt2img-grids/2023-07-24/grid-29a67c1c-099a-4d00-8ff3-1ebe6e64931a.png,/code/stable-diffusion-webui/data-oss/outputs/txt2img-images/2023-07-24/74626268-6c81-45ff-90b7-faba579dc309-1146644551.png,/code/stable-diffusion-webui/data-oss/outputs/txt2img-images/2023-07-24/6a233060-e197-4169-86ab-1c18adf04e3f-1146644552.png",
    "seed": "1146644551,1146644552",
    "error_msg": "",
    "total_time": 32.22393465042114
  },
  "info": ""
}
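Note that the image_url and seed fields in the response pack multiple values into comma-separated strings. A minimal parsing sketch, using abbreviated placeholder values:

```python
# The response packs several paths and seeds into comma-separated strings.
parameters = {
    "image_url": "/data-oss/outputs/txt2img-grids/grid-a.png,"
                 "/data-oss/outputs/txt2img-images/img-1.png",
    "seed": "1146644551,1146644552",
}
paths = parameters["image_url"].split(",")
seeds = [int(s) for s in parameters["seed"].split(",")]
print(paths)   # two individual file paths
print(seeds)   # → [1146644551, 1146644552]
```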

Sample asynchronous request:

# Send the request data to the input queue. 
curl --location --request POST '<service_url>/sdapi/v1/txt2img' \
--header 'Authorization: <token>' \
--header 'Content-Type: application/json' \
--data-raw '{
    "alwayson_scripts": {
        "sd_model_checkpoint": "deliberate_v2.safetensors",
        "id_task": "14837",
        "uid": "123",
        "save_dir": "tmp/outputs"
    },
    "prompt": "girls",
    "batch_size": 1,
    "n_iter": 2,
    "width": 576,
    "height": 576,
    "negative_prompt": "ugly, out of frame"
}'

Example: img2img

Sample request body:

{
    "alwayson_scripts": {
        "image_link":"https://eas-cache-cn-hangzhou.oss-cn-hangzhou-internal.aliyuncs.com/stable-diffusion-cache/tests/boy.png",
        "sd_model_checkpoint": "deliberate_v2.safetensors",
        "sd_vae": "Automatic",
        "save_dir": "/code/stable-diffusion-webui/data-oss/outputs"
    },
    "prompt": "girl",
    "batch_size": 1,                                            
    "n_iter": 2,                                                 
    "width": 576, 
    "height": 576,
    "negative_prompt": "ugly, out of frame",
    "steps": 20, # Sampling steps
    "seed": 111,   
    "subseed": 111, # Variation seed
    "subseed_strength": 0, # Variation strength
    "seed_resize_from_h": 0, # Resize seed from height
    "seed_resize_from_w": 0, # Resize seed from width
    "seed_enable_extras": false, # Extra
    "sampler_name": "DDIM", # Sampling method
    "cfg_scale": 7.5, # CFG Scale
    "restore_faces": true, # Restore faces
    "tiling": false, # Tiling
    "init_images": [], # image base64 str, default None
    "mask_blur": 4, # Mask blur
    "resize_mode": 1, # 0 just resize, 1 crop and resize, 2 resize and fill, 3 just resize
    "denoising_strength": 0.75, # Denoising strength
    "inpainting_mask_invert": 0, #int, index of ['Inpaint masked', 'Inpaint not masked'], Mask mode
    "inpainting_fill": 0, #index of ['fill', 'original', 'latent noise', 'latent nothing'], Masked content
    "inpaint_full_res": 0, # index of ["Whole picture", "Only masked"], Inpaint area
    "inpaint_full_res_padding": 32, #minimum=0, maximum=256, step=4, value=32, Only masked padding, pixels
    #"image_cfg_scale": 1, # resized by scale
    #"script_name": "Outpainting mk2", # The name of the script. Skip this field if you do not use a script.
    #"script_args": ["Outpainting", 128, 8, ["left", "right", "up", "down"], 1, 0.05] # The parameters of the script in the following order: fixed fields, pixels, mask_blur, direction, noise_q, and color_variation.
}

Sample response:

{
    "images":[],
    "parameters":{
        "id_task":"14837",
        "status":0,
        "image_url":"/data/api_test/img2img-grids/2023-06-05/grid-0000.png,/data/api_test/img2img-images/2023-06-05/00000-1003.png,/data/api_test/img2img-images/2023-06-05/00001-1004.png",
        "seed":"1003,1004",
        "error_msg":""
    },
    "info":""
}

Example: txt2img with ControlNet

Sample request body:

{
    "alwayson_scripts": {
        "sd_model_checkpoint": "deliberate_v2.safetensors", # The name of the model. 
        "save_dir": "/code/stable-diffusion-webui/data-oss/outputs",
        "controlnet":{
            "args":[
                {
                    "image_link": "https://pai-aigc-dataset.oss-cn-hangzhou.aliyuncs.com/pixabay_images/00008b87bf3ff6742b8cf81c358b9dbc.jpg",
                    "enabled": true, 
                    "module": "canny", 
                    "model": "control_v11p_sd15_canny", 
                    "weight": 1, 
                    "resize_mode": "Crop and Resize", 
                    "low_vram": false, 
                    "processor_res": 512, 
                    "threshold_a": 100, 
                    "threshold_b": 200, 
                    "guidance_start": 0, 
                    "guidance_end": 1, 
                    "pixel_perfect": true, 
                    "control_mode": "Balanced", 
                    "input_mode": "simple", 
                    "batch_images": "", 
                    "output_dir": "", 
                    "loopback": false
                }
            ]
        }
    },
    # Key parameters.
    "prompt": "girls",          
    "batch_size": 1,                                            
    "n_iter": 2,                                                 
    "width": 576, 
    "height": 576,
    "negative_prompt": "ugly, out of frame"
}

Sample response:

{
    "images":[],
    "parameters":{
        "id_task":"14837",
        "status":0,
        "image_url":"/data/api_test/txt2img-grids/2023-06-05/grid-0007.png,/data/api_test/txt2img-images/2023-06-05/00014-1003.png,/data/api_test/txt2img-images/2023-06-05/00015-1004.png",
        "seed":"1003,1004",
        "error_msg":"",
        "image_mask_url":"/data/api_test/controlnet_mask/2023-06-05/00000.png,/data/api_test/controlnet_mask/2023-06-05/00001.png"
    },
    "info":""
}

References

You can also use EAS in the following scenarios:

  • You can deploy a large language model (LLM) application that can be called by using the web UI or API operations. After the application is deployed, you can use the LangChain framework to integrate enterprise knowledge bases and implement intelligent Q&A and automation features. For more information, see Quickly deploy open source LLMs in EAS.

  • You can deploy an AI video generation model service by using ComfyUI and Stable Video Diffusion models. This helps you complete tasks such as short video generation and animation on social media platforms. For more information, see Use ComfyUI to deploy an AI video generation model service.

  • You can deploy a Retrieval-Augmented Generation (RAG)-based LLM chatbot that is suitable for Q&A, summarization, and other natural language processing (NLP) tasks that rely on specific knowledge bases. For more information, see RAG-based LLM chatbot.

  • In AI painting scenarios, you can use a trained Low-Rank Adaptation (LoRA) model as an auxiliary model in a Stable Diffusion service to improve image generation quality. For more information, see Deploy a LoRA SD model by using Kohya_ss in EAS.

  • For more information about the SDKs that you can use to call services, see SDKs.

  • For more information about billing rules of EAS, see Billing of EAS.