
Platform for AI: One-click deployment of the Step1X-Edit model

Last Updated: May 07, 2025

Step1X-Edit is a state-of-the-art open source image editing model launched by Step Star that aims to improve editing precision and image fidelity. By integrating multimodal language technology with a diffusion image decoder, the model delivers excellent performance across a variety of editing tasks and can meet professional image editing needs. Platform for AI (PAI) fully supports this model, allowing you to deploy and call the model in Model Gallery with a few clicks.

Model introduction

Step Star has officially released Step1X-Edit as an open source model for image editing. The model integrates a multimodal large language model (LLM) with a diffusion image decoder, achieving significant performance improvements in image editing. Step1X-Edit has a total of 19 billion parameters and offers high-precision editing and strong image fidelity, with the following technical advantages:

  • Precise semantic parsing: The model accurately understands user editing instructions and parses them at the semantic level, ensuring that editing results match the user intent.

  • Identity consistency preservation: During editing, the model maintains identity consistency in images, ensuring that the main subject features remain unaffected.

  • High-precision region-level control: The model supports precise control and editing of specific regions in images, achieving fine-grained image modifications.

  • Rich task support: The model supports up to 11 types of common image editing tasks, including text replacement and style transfer, meeting users' broad needs in diverse image editing scenarios.

  • Excellent performance: In the latest image editing benchmark GEdit-Bench, Step1X-Edit performs exceptionally well in semantic consistency, image quality, and overall score, demonstrating its leading position in the field of image editing.

For more information, see stepfun-ai/Step1X-Edit.

Environment requirement

To deploy the Step1X-Edit model, we recommend that you use a GPU with at least 48 GB of memory.
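Before you deploy, you can verify that an instance meets this requirement. The following sketch is illustrative and not part of the PAI workflow: it parses the output of `nvidia-smi --query-gpu=memory.total --format=csv,noheader` and checks each GPU against the 48 GB recommendation above.

```python
import subprocess

MIN_MEMORY_MIB = 48 * 1024  # 48 GB recommended for Step1X-Edit


def parse_memory_mib(nvidia_smi_output: str) -> list:
    """Parse lines such as '49152 MiB' into a list of integer MiB values."""
    sizes = []
    for line in nvidia_smi_output.strip().splitlines():
        value = line.strip().split()[0]  # drop the 'MiB' unit
        sizes.append(int(value))
    return sizes


def gpus_meeting_requirement() -> list:
    """Return the indexes of local GPUs with at least 48 GB of total memory."""
    output = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=memory.total", "--format=csv,noheader"],
        text=True,
    )
    return [
        i for i, mib in enumerate(parse_memory_mib(output))
        if mib >= MIN_MEMORY_MIB
    ]
```

For example, `parse_memory_mib("49152 MiB\n24576 MiB")` returns `[49152, 24576]`, and only the first GPU would pass the 48 GB check.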

Deploy the model

  1. Go to the Model Gallery page.

    1. Log on to the PAI console.

    2. In the upper-left corner, select a region based on your business requirements.

    3. In the left navigation bar, click Workspaces. On the Workspaces page, find the workspace that you want to manage and click its name.

    4. In the left navigation bar, choose QuickStart > Model Gallery.

  2. In the model list on the right side of the Model Gallery page, search for Step1X-Edit and click the model card.

  3. In the upper-right corner of the model details page, click Deploy to deploy the model to the Elastic Algorithm Service (EAS) inference service platform.


Call the model service

You can call the Step1X-Edit model service on a web UI or by calling an API.

Call the model service on a web UI

In the upper-right corner of the model service details page, click View Web App.


On the web UI, upload an image, enter a prompt, and then click Generate to generate an image.


Call the model service by calling an API

On the model service details page, click View Call Information to obtain the service URL and token.


The following sample Python code shows how to call the model service by calling an API:

import sys
import time

import requests

EAS_URL = "<YOUR_EAS_URL>"
EAS_TOKEN = "<YOUR_EAS_TOKEN>"


class TaskStatus:
    PENDING = "pending"
    PROCESSING = "processing"
    COMPLETED = "completed"
    FAILED = "failed"


# Submit the image editing task.
response = requests.post(
    f"{EAS_URL}/generate",
    headers={"Authorization": EAS_TOKEN},
    json={
        "prompt": "A spaceship orbiting Earth",
        "seed": 42,
        "neg_prompt": "low quality, blurry",
        "infer_steps": 28,
        "cfg_scale": 6,
        "image": "<Base64 encoding of your image>"
    },
)
response.raise_for_status()
task_id = response.json()["task_id"]
print(f"Task ID: {task_id}")

# Poll the task status until the image is ready or the task fails.
while True:
    status_response = requests.get(
        f"{EAS_URL}/tasks/{task_id}/status",
        headers={"Authorization": EAS_TOKEN},
    )
    status = status_response.json()

    print(f"Current status: {status['status']}")

    if status["status"] == TaskStatus.COMPLETED:
        print("Image ready!")
        break
    elif status["status"] == TaskStatus.FAILED:
        print(f"Failed: {status['error']}")
        sys.exit(1)

    time.sleep(5)

# Download the generated image.
image_response = requests.get(
    f"{EAS_URL}/tasks/{task_id}/image",
    headers={"Authorization": EAS_TOKEN},
)
with open("generated_image.jpg", "wb") as f:
    f.write(image_response.content)

print("Image downloaded successfully!")
Note
  • Replace <YOUR_EAS_URL> with the service URL that you obtained.

  • Replace <YOUR_EAS_TOKEN> with the service token that you obtained.

References