Get started
Model Gallery offers a range of pre-trained large language models (LLMs) from the open-source community. In Model Gallery, you can fine-tune, distill, compress, evaluate, and deploy models without writing code; a sketch for calling a deployed service programmatically follows the use cases below. Use cases:
Fine-tune, evaluate, compress, and deploy a Qwen2.5-Coder model
Fine-tune, evaluate, compress, and deploy a DistilQwen2 model
Develop a data augmentation and model distillation solution for LLMs
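After a model is deployed from Model Gallery to Elastic Algorithm Service (EAS), the resulting service can be called over HTTP. The snippet below is a minimal sketch, assuming the deployment exposes an OpenAI-compatible endpoint; the endpoint URL, token, and model name are placeholders, not values from this document.

```python
# Minimal sketch: call an LLM service deployed to EAS.
# Assumes the deployment exposes an OpenAI-compatible API; the endpoint URL,
# token, and model name below are placeholders.
from openai import OpenAI

client = OpenAI(
    base_url="https://<your-eas-endpoint>/v1",  # placeholder EAS service URL
    api_key="<your-eas-token>",                 # placeholder service token
)

response = client.chat.completions.create(
    model="<deployed-model-name>",              # placeholder deployed model name
    messages=[
        {"role": "user", "content": "Write a Python function that reverses a string."}
    ],
)
print(response.choices[0].message.content)
```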
Advanced usage
If Model Gallery does not include the model you require, or if its training and deployment capabilities do not meet your needs, consider using Data Science Workshop (DSW) and Deep Learning Containers (DLC) for model fine-tuning and training, and then deploying and running your model in Elastic Algorithm Service (EAS). A minimal fine-tuning sketch follows the table below.
Development stage | Use cases
Training |
Deployment |
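For reference, the following is a minimal sketch of the kind of fine-tuning script you might run interactively in DSW or submit as a DLC job. It uses the Hugging Face transformers and datasets libraries, which are assumptions rather than requirements stated here; the base model name, dataset file, and output path are placeholders.

```python
# Minimal sketch: supervised fine-tuning script for a DSW notebook or a DLC job.
# Model, dataset, and output paths are placeholders; transformers and datasets
# are assumed to be installed in the environment.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

model_name = "Qwen/Qwen2.5-0.5B"              # placeholder base model
tokenizer = AutoTokenizer.from_pretrained(model_name)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token  # ensure padding works for batching
model = AutoModelForCausalLM.from_pretrained(model_name)

# Placeholder dataset: a JSON Lines file with a "text" field per record.
dataset = load_dataset("json", data_files="train.jsonl")["train"]

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="./output",                 # mount to OSS/NAS when running in DLC
        per_device_train_batch_size=2,
        num_train_epochs=1,
        logging_steps=10,
    ),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
trainer.save_model("./output")
```

The same script works in both environments: run it directly in a DSW notebook or terminal for interactive experiments, or package it into a container image and submit it as a DLC job for larger runs, then deploy the saved model to EAS.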
Lingjun intelligent computing service
Lingjun intelligent computing service is tailored for large-scale deep learning scenarios. It provides heterogeneous computing resources and a comprehensive AI engineering platform. Get started with Lingjun through the following use case: