
Platform for AI: One-click deployment of the Stepfun Step1X-Edit model

Last Updated: Dec 08, 2025

Step1X-Edit is an advanced, open-source image editing model from Stepfun designed to improve editing precision and image fidelity. By integrating multimodal language technology with a diffusion image decoder, the model demonstrates excellent performance in various editing tasks and meets professional image editing needs. The Platform for AI (PAI) fully supports this model. You can deploy and call the model with a single click in the PAI Model Gallery.

Model introduction

Stepfun has officially released and open-sourced the Step1X-Edit large image editing model. The model combines a Multimodal Large Language Model (MLLM) with a Diffusion Image Transformer (DiT), achieving significant performance improvements in image editing. Step1X-Edit has a total of 19 billion parameters. It provides high-precision editing and image fidelity, and demonstrates several technical advantages:

  • Precise semantic parsing: The model accurately understands user editing instructions and parses them at a semantic level. This ensures the editing results match the user's intent.

  • Identity consistency: During editing, the model maintains identity consistency in images. This ensures the main subject's features are not affected.

  • High-precision regional control: The model supports precise control and editing of specific image areas. This allows for fine-grained image modifications.

  • Rich task support: The model supports up to 11 types of common image editing tasks, such as text replacement and style transfer. This meets a wide range of user needs in diverse image editing scenarios.

  • Excellent performance: In the latest GEdit-Bench image editing benchmark, Step1X-Edit performs exceptionally well in semantic consistency, image quality, and overall score. This demonstrates its leading position in the image editing field.

For more information about Step1X-Edit, see stepfun-ai/Step1X-Edit.

Environment requirements

Deploying the Stepfun Step1X-Edit model requires a GPU with 48 GB or more of video memory.

Deploy the model

  1. Go to the Model Gallery page.

    1. Log on to the PAI console.

    2. In the upper-left corner, select a region.

    3. In the navigation pane on the left, choose Workspaces. Click the name of the target workspace to open it.

    4. In the navigation pane on the left, choose QuickStart > Model Gallery.

  2. On the Model Gallery page, search for Stepfun Step1X-Edit in the model list on the right and click the model card to go to the model details page.

  3. In the upper-right corner, click Deploy. Configure the inference service name and deployment resources to deploy the model to the Elastic Algorithm Service (EAS) inference service platform.


Call the model

The deployed Stepfun Step1X-Edit model can be called from a web application or using an API.

Web application

On the model service details page, click View WEB App in the upper-right corner to open the web UI.


Upload an image, enter a prompt, and then click Generate to create an image.


API call

On the model service details page, click View Invocation Information to obtain the endpoint and token.


You can use the following Python sample code to make an API call:

import requests
import sys
import time

EAS_URL = "<YOUR_EAS_URL>"
EAS_TOKEN = "<YOUR_EAS_TOKEN>"

HEADERS = {"Authorization": EAS_TOKEN}


class TaskStatus:
    PENDING = "pending"
    PROCESSING = "processing"
    COMPLETED = "completed"
    FAILED = "failed"


# Submit an asynchronous image editing task.
response = requests.post(
    f"{EAS_URL}/generate",
    headers=HEADERS,
    json={
        "prompt": "A spaceship orbiting Earth",
        "seed": 42,
        "neg_prompt": "low quality, blurry",
        "infer_steps": 28,
        "cfg_scale": 6,
        "size": 1024,
        "image": "<The Base64 encoding of your image>"
    }
)
task_id = response.json()["task_id"]
print(f"Task ID: {task_id}")

# Poll the task status every 5 seconds until it completes or fails.
while True:
    status_response = requests.get(
        f"{EAS_URL}/tasks/{task_id}/status",
        headers=HEADERS,
    )
    status = status_response.json()

    print(f"Current status: {status['status']}")

    if status["status"] == TaskStatus.COMPLETED:
        print("Image ready!")
        break
    elif status["status"] == TaskStatus.FAILED:
        print(f"Failed: {status['error']}")
        sys.exit(1)

    time.sleep(5)

# Download the generated image and save it to a local file.
image_response = requests.get(
    f"{EAS_URL}/tasks/{task_id}/image",
    headers=HEADERS,
)
with open("generated_image.jpg", "wb") as f:
    f.write(image_response.content)

print("Image downloaded successfully!")

Note: Replace EAS_URL and EAS_TOKEN with the endpoint and token you obtained.
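The `image` field in the request payload expects the Base64 encoding of your input image. A minimal helper for producing that value might look like the following. The function name `encode_image` is illustrative, not part of the service API, and this sketch assumes the service accepts a plain Base64 string without a data-URI prefix, as the sample payload above suggests.

```python
import base64


def encode_image(path: str) -> str:
    """Read an image file and return its contents as a Base64 string."""
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode("utf-8")
```

You can then build the request payload with, for example, `"image": encode_image("input.jpg")`.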

References