Disclaimer: The views expressed herein are for reference only and don't necessarily represent the official views of Alibaba Cloud.
Every digital marketing team faces the same fundamental tension: content demand grows exponentially, but writing capacity does not.
A mid-sized e-commerce business might need thousands of product descriptions localised across five languages.
A B2B SaaS company needs weekly email campaigns, landing page variants for A/B tests, and ad copy refreshed monthly.
A retail brand running seasonal promotions has to produce hundreds of creative assets in a matter of days.
The traditional answer — hire more writers — doesn't scale. The emerging answer is generative AI, and more specifically, large language models (LLMs) that can be integrated directly into your marketing stack via API.
Alibaba Cloud's Qwen model family, accessible through Model Studio and the DashScope API, offers a production-ready path to doing exactly this.
In this guide, you will learn how to set up Qwen, craft effective prompts for marketing use cases, and build a repeatable pipeline that your team can use to generate high-quality copy at scale.
Qwen (通义千问) is Alibaba Cloud's family of large language models, available through Alibaba Cloud Model Studio.
The family includes several tiers suited to different use cases:
| Model | Best For | Speed |
|---|---|---|
| qwen3-max | Complex reasoning, flagship quality | Moderate |
| qwen-plus | Balanced quality and cost, everyday tasks | Fast |
| qwen3.5-flash | High-volume, latency-sensitive workloads | Very Fast |
| qwen-long | Long-document analysis (up to 10M-token context) | Moderate |
For most marketing copywriting workflows — product descriptions, ad copy, and email subject lines — qwen-plus offers the right balance of output quality, speed, and cost.
For high-volume bulk generation (e.g., 10,000+ product descriptions), qwen3.5-flash is the better fit due to its lower latency and tiered pricing.
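That rule of thumb can be encoded directly in your pipeline. The helper below is a minimal sketch; the volume threshold and the `needs_reasoning` flag are illustrative assumptions, not official sizing guidance:

```python
# Hypothetical helper: pick a model tier by expected request volume.
# Thresholds are illustrative, not official guidance.
def pick_model(n_requests: int, needs_reasoning: bool = False) -> str:
    if needs_reasoning:
        return "qwen3-max"      # flagship quality for complex copy
    if n_requests >= 1000:
        return "qwen3.5-flash"  # high-volume, latency-sensitive bulk work
    return "qwen-plus"          # balanced everyday default
```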
All models are accessible via two interfaces: the native DashScope SDK, and an OpenAI-compatible endpoint that works with the standard OpenAI SDK. Both are covered below.
Before writing any code, make sure you have the following in place: an Alibaba Cloud account with Model Studio activated, a DashScope API key, and a recent Python version (3.9 or later for the type hints used below). Then install one of the two SDKs:

```bash
# Install the DashScope SDK
pip install dashscope

# OR install the OpenAI SDK (for the compatible interface)
pip install openai
```
Store your API key as an environment variable — never hardcode credentials:
```bash
export DASHSCOPE_API_KEY="sk-your-api-key-here"
```
Let's verify your setup with a minimal call, starting with the native DashScope SDK.
```python
import os
import dashscope
from dashscope import Generation

# Use the international endpoint; omit this line for the China region.
dashscope.base_http_api_url = 'https://dashscope-intl.aliyuncs.com/api/v1'

messages = [
    {'role': 'system', 'content': 'You are an expert marketing copywriter.'},
    {'role': 'user', 'content': 'Write a one-sentence tagline for a cloud storage product.'}
]

response = Generation.call(
    api_key=os.getenv("DASHSCOPE_API_KEY"),
    model="qwen-plus",
    messages=messages,
    result_format="message"
)

if response.status_code == 200:
    print(response.output.choices[0].message.content)
else:
    print(f"Error {response.status_code}: {response.message}")
```
If your team already uses OpenAI SDK conventions, you can switch to Qwen simply by pointing the client at the DashScope compatible-mode base URL:
```python
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.getenv("DASHSCOPE_API_KEY"),
    base_url="https://dashscope-intl.aliyuncs.com/compatible-mode/v1"
)

completion = client.chat.completions.create(
    model="qwen-plus",
    messages=[
        {"role": "system", "content": "You are an expert marketing copywriter."},
        {"role": "user", "content": "Write a one-sentence tagline for a cloud storage product."}
    ]
)
print(completion.choices[0].message.content)
```
Both approaches produce the same output. Use the DashScope SDK if you want access to Alibaba Cloud-specific features and extended parameters. Use the OpenAI-compatible route if you're migrating an existing workflow.
The quality of your generated copy is directly determined by the quality of your prompt. For marketing use cases, there are four principles that consistently improve output:
1. Set a clear role in the system message. Tell the model exactly what kind of writer it is — a conversion copywriter, a technical product writer, an email marketing specialist. This anchors the tone and vocabulary.
2. Provide structured context. Give the model the product name, key features, target audience, and desired tone. The more specific you are, the less variance in output.
3. Specify format and length constraints explicitly. "Write a product description in 3 sentences" produces a more consistent output than "write a product description."
4. Use examples where possible. A single example of your brand voice in the prompt dramatically improves output consistency across hundreds of generations.
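The four principles above can be sketched as a small prompt builder. The helper below is illustrative; the function name and field names are assumptions, not part of the article's pipeline:

```python
def build_copy_prompt(product: dict, example: str = "") -> list:
    """Assemble a messages list applying the four principles:
    clear role, structured context, explicit constraints, optional example."""
    system = "You are a conversion copywriter for e-commerce brands."  # 1. clear role
    if example:
        system += f"\nMatch this brand-voice example:\n{example}"      # 4. example
    user = (
        f"Product: {product['name']}\n"                                # 2. structured context
        f"Features: {', '.join(product['features'])}\n"
        f"Audience: {product['audience']}\n"
        "Write a product description in exactly 3 sentences."          # 3. explicit constraint
    )
    return [{"role": "system", "content": system},
            {"role": "user", "content": user}]
```

The returned list plugs straight into either SDK's `messages` parameter.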
The following script accepts a product dictionary and generates a structured description. This pattern is reusable across catalogues of any size.
```python
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.getenv("DASHSCOPE_API_KEY"),
    base_url="https://dashscope-intl.aliyuncs.com/compatible-mode/v1"
)

SYSTEM_PROMPT = """You are a senior e-commerce copywriter specialising in
clear, benefit-driven product descriptions. Your writing is concise,
persuasive, and always leads with the customer's benefit before the feature.
Never use filler phrases like 'look no further' or 'state of the art'."""

def generate_product_description(product: dict) -> str:
    user_prompt = f"""
Write a product description for the following item.

Product Name: {product['name']}
Category: {product['category']}
Key Features: {', '.join(product['features'])}
Target Audience: {product['audience']}
Tone: {product['tone']}

Format:
- Headline: One compelling sentence (max 12 words)
- Body: Two to three sentences highlighting the top benefits
- CTA: One action-oriented closing sentence

Output only the formatted description. No preamble or explanation.
"""
    response = client.chat.completions.create(
        model="qwen-plus",
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": user_prompt}
        ],
        temperature=0.7,
        max_tokens=300
    )
    return response.choices[0].message.content

# Example usage
product = {
    "name": "CloudVault Pro 2TB",
    "category": "Cloud Storage",
    "features": ["End-to-end encryption", "Cross-device sync", "Offline access", "99.99% uptime SLA"],
    "audience": "Remote-first professionals and small business owners",
    "tone": "Professional and reassuring"
}

print(generate_product_description(product))
```
Sample output:
```
Headline: Secure, Seamless Storage That Works Wherever You Do

Body: CloudVault Pro keeps your files encrypted, synced, and accessible across
every device — even without an internet connection. Backed by a 99.99% uptime
SLA, it's built for teams who can't afford downtime.

CTA: Start your free 30-day trial and take control of your data today.
```
A/B testing ad copy requires multiple distinct variations — different headlines, different angles, different calls to action. Manually writing ten variations of the same ad is tedious; generating them with Qwen takes seconds.
````python
import json

def generate_ad_variants(product_name: str, benefit: str, audience: str, n: int = 5) -> list:
    user_prompt = f"""
Generate {n} distinct ad copy variations for a digital marketing campaign.

Product: {product_name}
Core Benefit: {benefit}
Audience: {audience}

For each variation, provide:
- Headline (max 8 words)
- Description (max 20 words)
- CTA (3-5 words)

Return ONLY a valid JSON array. Each item must have keys:
"headline", "description", "cta". No additional text.
"""
    response = client.chat.completions.create(
        model="qwen-plus",
        messages=[
            {"role": "system", "content": "You are a performance marketing specialist who writes high-converting ad copy for digital campaigns."},
            {"role": "user", "content": user_prompt}
        ],
        temperature=0.85,
        max_tokens=800
    )
    raw = response.choices[0].message.content
    # Strip markdown code fences if present
    raw = raw.replace("```json", "").replace("```", "").strip()
    return json.loads(raw)

variants = generate_ad_variants(
    product_name="ApsaraDB RDS",
    benefit="Fully managed database that scales automatically",
    audience="Backend developers and DevOps teams",
    n=5
)

for i, v in enumerate(variants, 1):
    print(f"\n--- Variant {i} ---")
    print(f"Headline: {v['headline']}")
    print(f"Description: {v['description']}")
    print(f"CTA: {v['cta']}")
````
The `temperature=0.85` setting produces varied creative output while keeping it grounded. Lower it to 0.5 if you need more consistent, predictable variations. Requesting JSON output directly avoids post-processing complexity.
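In production, a model occasionally wraps the JSON in extra prose despite the instructions, so a bare `json.loads` can fail. A slightly more defensive parser (an illustrative sketch, not part of the scripts above) is worth the few extra lines:

````python
import json
import re

def parse_json_array(raw: str) -> list:
    """Defensively parse a model response that should be a JSON array.
    Strips code fences, then falls back to the first [...] span found."""
    cleaned = raw.replace("```json", "").replace("```", "").strip()
    try:
        return json.loads(cleaned)
    except json.JSONDecodeError:
        # Last resort: extract the bracketed span and parse that.
        match = re.search(r"\[.*\]", cleaned, re.DOTALL)
        if match:
            return json.loads(match.group(0))
        raise
````

Swap this in wherever the examples above call `json.loads(raw)` directly.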
Email subject lines have an outsized impact on campaign performance — open rates often hinge entirely on this single line. Generating and testing multiple subject line angles is a strong use of LLM-powered automation.
````python
def generate_subject_lines(
    campaign_theme: str,
    audience_segment: str,
    email_goal: str,
    count: int = 10
) -> list[str]:
    user_prompt = f"""
Generate {count} email subject lines for a marketing campaign.

Campaign Theme: {campaign_theme}
Audience: {audience_segment}
Goal: {email_goal}

Rules:
- Each subject line must be under 50 characters
- Vary the approach: use curiosity, urgency, benefit, personalisation, and question formats
- Do not repeat the same angle twice
- Avoid spam trigger words like "free", "guaranteed", or excessive punctuation

Return ONLY a JSON array of strings. No numbering, no explanations.
"""
    response = client.chat.completions.create(
        model="qwen-plus",
        messages=[
            {"role": "system", "content": "You are an email marketing specialist with expertise in open rate optimisation."},
            {"role": "user", "content": user_prompt}
        ],
        temperature=0.9,
        max_tokens=500
    )
    raw = response.choices[0].message.content.replace("```json", "").replace("```", "").strip()
    return json.loads(raw)

subject_lines = generate_subject_lines(
    campaign_theme="Year-end cloud cost optimisation tips",
    audience_segment="IT managers at mid-sized enterprises",
    email_goal="Drive webinar registrations",
    count=10
)

for line in subject_lines:
    print(f" • {line}")
````
For large-scale catalogue work — generating thousands of product descriptions overnight — you need batch processing with error handling and rate limiting.
```python
import time
import logging
from typing import Optional

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

def generate_with_retry(
    prompt: str,
    system: str,
    model: str = "qwen-plus",
    max_retries: int = 3,
    delay: float = 2.0
) -> Optional[str]:
    for attempt in range(max_retries):
        try:
            response = client.chat.completions.create(
                model=model,
                messages=[
                    {"role": "system", "content": system},
                    {"role": "user", "content": prompt}
                ],
                temperature=0.7,
                max_tokens=400
            )
            return response.choices[0].message.content
        except Exception as e:
            logger.warning(f"Attempt {attempt + 1} failed: {e}")
            if attempt < max_retries - 1:
                time.sleep(delay * 2 ** attempt)  # Exponential back-off: 2s, 4s, 8s...
    logger.error("All retry attempts exhausted.")
    return None

def batch_generate_descriptions(products: list[dict]) -> list[dict]:
    results = []
    for i, product in enumerate(products):
        logger.info(f"Processing {i+1}/{len(products)}: {product['name']}")
        prompt = f"""
Product: {product['name']}
Features: {', '.join(product.get('features', []))}
Audience: {product.get('audience', 'General')}

Write a 2-sentence product description. Lead with the primary benefit.
"""
        copy = generate_with_retry(
            prompt=prompt,
            system="You are a concise product copywriter. Write benefit-first descriptions."
        )
        results.append({
            **product,
            "generated_description": copy,
            "status": "success" if copy else "failed"
        })
        time.sleep(0.5)  # Respect API rate limits
    return results
```
For production workloads, consider using qwen3.5-flash instead of qwen-plus in the batch function — it processes requests faster and at lower cost, which matters significantly at scale.
Raw LLM output can vary in tone, especially at higher temperature settings. The most reliable way to enforce brand voice is to include a brand voice reference block in every system prompt, along with explicit do/don't rules.
```python
BRAND_VOICE_BLOCK = """
BRAND VOICE GUIDELINES:
- Tone: Confident but not arrogant. Direct but not blunt.
- Vocabulary: Technical precision where needed; plain language everywhere else.
- Always write in active voice.
- Never use: "leverage", "synergy", "game-changing", "cutting-edge", "state of the art"
- Always use: Second person ("you", "your team") not third person ("businesses", "companies")
- Length: Shorter is almost always better. Cut every unnecessary word.

BRAND VOICE EXAMPLE (match this style):
"Deploy in minutes. Scale without limits. Pay only for what you use."
"""
```
Prepend this block to your system prompt for every generation call. As your guidelines evolve, update this block centrally — all downstream calls inherit the change automatically.
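A small helper (hypothetical, not part of the scripts above) keeps the prepending consistent across every call site:

```python
def with_brand_voice(task_prompt: str, voice_block: str) -> str:
    """Combine the shared voice guidelines with a task-specific system prompt."""
    return voice_block.strip() + "\n\n" + task_prompt.strip()
```

For example, `with_brand_voice("You are a concise product copywriter.", BRAND_VOICE_BLOCK)` yields a single system prompt that carries the guidelines into every generation.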
Understanding the cost implications of your pipeline helps you choose the right model for each task.
| Use Case | Recommended Model | Why |
|---|---|---|
| Single product description | qwen-plus | Best quality-to-cost ratio for low volume |
| Batch of 1,000+ descriptions | qwen3.5-flash | Lower latency, tiered pricing at scale |
| Complex multilingual copy | qwen3-max | Strongest cross-language performance |
| Email subject line A/B variants | qwen-plus | Speed and quality balanced |
| Brand voice fine-tuning research | qwen3-max | Richer contextual understanding |
Alibaba Cloud Model Studio uses tiered pricing based on input tokens, which means cost-per-request decreases as volume increases — a meaningful advantage for teams running large-scale content operations.
Monitor your usage directly through the Alibaba Cloud Model Studio console to set spending alerts and track token consumption by project.
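You can also track consumption in code: the OpenAI-compatible response exposes a `usage` object with `prompt_tokens` and `completion_tokens` fields. A minimal accumulator (an illustrative sketch, not an official utility) might look like:

```python
class UsageTracker:
    """Accumulate token counts across API calls for cost monitoring.
    Feed it each response's `usage` object after every call."""

    def __init__(self):
        self.prompt = 0
        self.completion = 0

    def add(self, usage) -> None:
        # `usage` is the response's usage object (prompt_tokens / completion_tokens).
        self.prompt += usage.prompt_tokens
        self.completion += usage.completion_tokens

    @property
    def total(self) -> int:
        return self.prompt + self.completion
```

In a batch loop, call `tracker.add(response.usage)` after each request and log `tracker.total` at the end of the run.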
The scripts above are building blocks. In practice, you will want to integrate the Qwen API into your existing tools. Common integration patterns include:
Spreadsheet-triggered generation: Connect a Google Sheets or local CSV to your Python script. Marketing teams input product data in rows; the script reads each row, calls the API, and writes the generated copy back to an adjacent column.
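A minimal CSV version of that pattern might look like the following; the column names and the injected `generate` callable are assumptions for illustration:

```python
import csv

def process_catalogue_csv(in_path: str, out_path: str, generate) -> None:
    """Read product rows from a CSV, call `generate(row)` per row
    (e.g. a wrapper around the description function), and write
    the copy back as a new column."""
    with open(in_path, newline="", encoding="utf-8") as f:
        rows = list(csv.DictReader(f))
    for row in rows:
        row["generated_description"] = generate(row)
    with open(out_path, "w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=list(rows[0].keys()))
        writer.writeheader()
        writer.writerows(rows)
```

The same read-generate-write loop maps directly onto a Google Sheets API client if your team works in spreadsheets rather than files.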
CMS integration: For teams using headless CMS platforms, set up a webhook or scheduled job that calls the DashScope API to populate draft content fields automatically when a new product SKU is added.
Email platform workflow: Connect the subject line generator to your email marketing workflow. Before sending a campaign, generate ten subject line variants, pick the top two based on your criteria, and run them as a true A/B test.
Internal review loop: Always build a human review step into your pipeline. Qwen produces strong drafts, but a human reviewer should check for accuracy, brand alignment, and any market-specific sensitivities before content is published.
A few practical caveats for production deployments:
Factual accuracy: LLMs are generative, not retrieval-based. Always have a human verify that product specifications in generated descriptions are accurate. Feed the model structured product data rather than asking it to infer specifications.
Consistency at scale: Even with a strong system prompt, output can vary across a large batch. Consider adding a lightweight post-processing step that checks for prohibited words or phrases using simple string matching.
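That post-processing check can be as simple as the sketch below, reusing the banned phrases from the brand voice guidelines earlier in this guide:

```python
# Banned phrases taken from the brand voice guidelines above.
PROHIBITED = ["leverage", "synergy", "game-changing", "cutting-edge", "state of the art"]

def find_violations(text: str, banned: list = PROHIBITED) -> list:
    """Return any banned phrases found in generated copy (case-insensitive)."""
    lowered = text.lower()
    return [phrase for phrase in banned if phrase in lowered]
```

Run it over each batch result and route any item with a non-empty violation list back for regeneration or human review.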
Language and locale: Qwen has strong multilingual capabilities, but always have native speakers review copy in languages other than English or Mandarin before publishing to those markets.
API key security: Store your `DASHSCOPE_API_KEY` in environment variables or a secrets manager — never commit it to version control.
In this guide, you built a working AI copywriting pipeline using Qwen on Alibaba Cloud, covering:

- Setting up the DashScope SDK and the OpenAI-compatible interface
- Prompt design principles for marketing copy
- Generating product descriptions, ad variants, and email subject lines
- Batch processing with retries and rate limiting
- Enforcing brand voice and managing cost at scale
The same pipeline pattern extends naturally to other content types: social media captions, landing page headlines, push notification copy, and localised content for different regions.
To go further, explore Alibaba Cloud Model Studio for the full model catalogue, and review the DashScope API reference for advanced parameters including streaming, function calling, and multimodal inputs.
For teams looking to build a complete marketing data and analytics layer alongside their content pipeline, DataWorks and Quick BI on Alibaba Cloud provide the infrastructure to measure what your AI-generated content actually produces in performance terms.