
Import data

Last Updated: Aug 08, 2018

Upload data

Generally, user job data and executable files are imported into an E-HPC cluster through OSS (Object Storage Service).

  • If you have not created an OSS bucket yet, or want to learn more about OSS, visit the OSS homepage and see Activate OSS. We strongly recommend activating OSS in the same region as the E-HPC cluster to avoid cross-region latency.

  • After activating OSS, create a bucket. For more information, see Create a bucket and Endpoints.

For more information about uploading job-related files to the OSS bucket, see OSS file upload.
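
If you prefer to work from the command line, the ossutil tool can create the bucket and upload the files in one pass. The following is a minimal sketch, assuming ossutil is installed and configured with your AccessKey and that the bucket name e-hpc-quickstart is available in your account; substitute your own names as needed.

  # Create the bucket (skip this step if it already exists).
  ossutil mb oss://e-hpc-quickstart
  # Upload the job script and data file to the bucket's root directory.
  ossutil cp test.py oss://e-hpc-quickstart/
  ossutil cp test.data oss://e-hpc-quickstart/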

Import files

Once you are logged on to a node of the E-HPC cluster, you can access the bucket's VPC endpoint over HTTP to import the job files.

This example assumes that the job script is test.py, the job data file is test.data, the E-HPC cluster and the OSS bucket are both located in the China East 1 (Hangzhou) region, the bucket containing the job files is named e-hpc-quickstart, and test.py and test.data are located in the root directory of that bucket.

  cd /ehpcdata
  wget http://e-hpc-quickstart.vpc100-oss-cn-hangzhou.aliyuncs.com/test.py
  wget http://e-hpc-quickstart.vpc100-oss-cn-hangzhou.aliyuncs.com/test.data

If wget is not installed in the current system image, you can use curl instead, or install wget from the software repository (for example, yum install wget).

  curl -O http://e-hpc-quickstart.vpc100-oss-cn-hangzhou.aliyuncs.com/test.py
  curl -O http://e-hpc-quickstart.vpc100-oss-cn-hangzhou.aliyuncs.com/test.data

If the bucket is set to private, use the OSS console to select the required files and generate a signed URL (an access link that carries a temporary OSSAccessKeyId and other signature parameters), then use that URL to import the files. In the console, set the validity period for the signature, then click Copy File URL to obtain a URL for temporary external access.
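
Because the signed URL contains query parameters joined by & characters, quote the whole URL when passing it to wget or curl. A minimal sketch, with placeholder values standing in for the signature parameters; the endpoint shown follows the earlier commands, so if the URL copied from the console differs, use that exact URL instead:

  # Quote the URL so the shell does not split it at the & characters.
  # <AccessKeyId>, <ExpireTime>, and <Signature> are placeholders from the copied link.
  wget -O test.py "http://e-hpc-quickstart.vpc100-oss-cn-hangzhou.aliyuncs.com/test.py?OSSAccessKeyId=<AccessKeyId>&Expires=<ExpireTime>&Signature=<Signature>"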
