The demo of the video editing SDK for web provides only basic features. You can add extended features to the demo based on your business requirements. This topic describes the extended features of the video editing SDK for web and provides examples of how to add them.
Examples
To add extended features of the video editing SDK for web to the demo, modify the code in the fe/src/ProjectDetail.jsx file. The following examples show how to use common extended features.
Dynamically obtain the version number of the video editing SDK for web
If you need to use the version number of the video editing SDK for web in your code, we recommend that you dynamically obtain the version number.
window.AliyunVideoEditor.version
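For example, the digital human samples later in this topic pass the value as the SdkVersion request parameter when they call server API operations. A minimal sketch, assuming the requestGet helper from the demo:
// requestGet is the request helper from the demo, not part of the SDK.
const res = await requestGet('ListSmartSysAvatarModels', {
  PageNo: 1,
  PageSize: 20,
  SdkVersion: window.AliyunVideoEditor.version, // The dynamically obtained version number.
});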
Change the default subtitle text
The default subtitle text is Online Editing. To change the default subtitle text, specify the defaultSubtitleText parameter. The text cannot exceed 20 characters in length.
window.AliyunVideoEditor.init({
// Other parameters are omitted.
defaultSubtitleText: 'Custom subtitle text'
})
Use custom button text
You can specify the customTexts parameter to customize the text for the Import, Save, and Generate buttons on the video editing page.
window.AliyunVideoEditor.init({
// Other parameters are omitted.
customTexts: {
importButton: 'Custom text for Import',
updateButton: 'Custom text for Save',
produceButton: 'Custom text for Generate'
}
})
Change the default aspect ratio of the preview window
The default aspect ratio of the preview window is 16:9. To change the default aspect ratio of the preview window, specify the defaultAspectRatio parameter. For more information about the supported aspect ratios, see PlayerAspectRatio.
window.AliyunVideoEditor.init({
// Other parameters are omitted.
defaultAspectRatio: '9:16'
})
Obtain timeline data
If you modify the obtained timeline data, make sure that the timeline remains in a valid format to prevent errors when you call server API operations.
window.AliyunVideoEditor.getProjectTimeline()
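A minimal sketch that reads the current timeline and submits it to a production task, assuming the request helper from the demo and a projectId variable that your own code saves:
const timeline = window.AliyunVideoEditor.getProjectTimeline();
// Submit the timeline as-is, or modify it only in the format that the server expects.
await request('SubmitMediaProducingJob', {
  ProjectId: projectId,
  Timeline: JSON.stringify(timeline),
  OutputMediaTarget: 'oss-object',
  OutputMediaConfig: JSON.stringify({
    MediaURL: 'https://example-bucket.oss-cn-shanghai.aliyuncs.com/example.mp4',
  }),
});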
Add the Back button
By default, the Back button is not displayed in the upper-left corner of the video editing page. You can add the Back button and implement the onBackButtonClick method to specify the click logic.
window.AliyunVideoEditor.init({
// Other parameters are omitted.
onBackButtonClick: () => {
window.location.href = '/mediaEdit/list'; // Specify the page to which a user is navigated after the user clicks the Back button, such as the project list page.
}
})
Add a custom logo
By default, no logo is displayed in the upper-left corner of the video editing page. To add a custom logo, specify the customTexts parameter.
window.AliyunVideoEditor.init({
// Other parameters are omitted.
customTexts: {
logoUrl: 'https://www.example.com/assets/example-logo-url.png'
}
})
Add a material import page
You can implement the searchMedia method to add a page that appears after a user clicks Import and is used to import materials. Implementation logic: search for materials in the media asset library, call the AddEditingProjectMaterials method to associate the selected materials with the video editing project, and resolve the material array in the returned Promise object. For more information, see the fe/src/SearchMediaModal.jsx file in the demo.
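A minimal sketch of this flow, assuming the request, get, and transMediaList helpers from the demo and a projectId variable that your own code saves. The selection dialog box and the selectedMediaIds variable are hypothetical placeholders:
window.AliyunVideoEditor.init({
  // Other parameters are omitted.
  searchMedia: async () => {
    // 1. Let the user select media assets, for example in a custom dialog box. The selection logic is omitted; selectedMediaIds is a hypothetical result, and this sketch assumes all selected assets are videos.
    const selectedMediaIds = ['<yourMediaId>'];
    // 2. Associate the selected media assets with the editing project.
    await request('AddEditingProjectMaterials', {
      ProjectId: projectId,
      MaterialMaps: JSON.stringify({ video: selectedMediaIds.join(',') }),
    });
    // 3. Query the media asset information and convert it to the format that the SDK expects.
    const res = await request('BatchGetMediaInfos', {
      MediaIds: selectedMediaIds.join(','),
      AdditionType: 'FileInfo',
    });
    // 4. Resolve the material array so that the SDK can display the imported materials.
    return transMediaList(get(res, 'data.MediaInfos', []));
  },
});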
Add a video production dialog box
To add a video production dialog box, implement the produceEditingProjectVideo method. This way, a video production dialog box appears after a user clicks Generate. Implementation logic: after a user clicks OK on the page for configuring video production parameters, call the SubmitMediaProducingJob method and resolve the returned Promise object after the job is submitted. For more information, see the fe/src/ProduceVideoModal.jsx file in the demo.
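A minimal sketch of this flow, assuming the request helper from the demo, a projectId variable that your own code saves, and a hypothetical showProduceDialog function that collects the output settings from the user. See fe/src/ProduceVideoModal.jsx in the demo for a complete implementation:
window.AliyunVideoEditor.init({
  // Other parameters are omitted.
  produceEditingProjectVideo: async ({ timeline }) => { // This method is called when a user clicks Generate.
    // Open a custom dialog box and wait until the user clicks OK. showProduceDialog is hypothetical.
    const output = await showProduceDialog();
    // Submit the video production task with the settings collected from the dialog box.
    await request('SubmitMediaProducingJob', {
      ProjectId: projectId,
      Timeline: JSON.stringify(timeline),
      OutputMediaTarget: 'oss-object',
      OutputMediaConfig: JSON.stringify({ MediaURL: output.mediaUrl }),
    });
    // The Promise resolves after the job is submitted, which tells the SDK that production has started.
  },
});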
You can also implement the produceEditingProjectVideo method to specify fixed values for production parameters and check timeline data at the business layer before video export. For example, you can specify a fixed bucket and video format. This prevents unexpected modifications. Sample code:
window.AliyunVideoEditor.init({
// Other parameters are omitted.
produceEditingProjectVideo: ({ timeline }) => { // This method is called when a user clicks Generate.
// Find all subtitle tracks that contain one or more clips.
const subtitleTracks = timeline.VideoTracks.filter((t) => t.Type === 'Subtitle' && t.VideoTrackClips.length > 0);
if (subtitleTracks.length < 2) {
// If the number of subtitle tracks that contain clips is less than 2, return an error message.
console.error('The number of subtitle tracks that contain clips is less than 2.');
return;
} else {
// If the number of subtitle tracks that contain clips is equal to or greater than 2, send a video production request to the server. The detailed steps are omitted.
}
},
});
Intelligently generate subtitles
By default, the Smart generate subtitles button is not displayed on the video editing page. To add the button, specify the asrConfig parameter. For more information, see the Demo code.
window.AliyunVideoEditor.init({
// Other parameters are omitted.
asrConfig: {
interval: 5000,
submitASRJob: async (mediaId, startTime, duration) => {
const res = await request("SubmitASRJob", {
InputFile: mediaId,
StartTime: startTime,
Duration: duration,
});
const jobId = get(res, "data.JobId");
return { jobId: jobId, jobDone: false };
},
getASRJobResult: async (jobId) => {
const res = await request("GetSmartHandleJob", {
JobId: jobId,
});
const isDone = get(res, "data.State") === "Finished";
const isError = get(res, "data.State") === "Failed";
let result;
if (res.data && res.data?.Output) {
result = JSON.parse(res.data?.Output);
}
return {
jobId,
jobDone: isDone,
result,
jobError: isError ? "The intelligent subtitling task failed": undefined,
};
},
},
});
Use media asset marks
Specify media asset marks
Specify media asset marks when media assets are imported
Implement the getEditingProjectMaterials method in the video editing SDK for web to convert the media asset format used by the API to the format that is expected by the SDK. In this method, you can also specify the logic for converting media asset marks.
const markData = item.MediaDynamicInfo.DynamicMetaData.Data;
if (markData) {
  const dataObject = JSON.parse(markData);
  marks = dataObject.MediaMark.map((m) => ({
    startTime: m.MarkStartTime,
    endTime: m.MarkEndTime,
    content: m.MarkContent,
  }));
  result.video.marks = marks;
}
Specify media asset marks when video production tasks are submitted
Convert the media asset marks specified by the mediaMarks object to the format that is specified by the API. When you call the SubmitMediaProducingJob method to submit video production tasks, specify the media asset marks returned by the SDK.
if (mediaMarks.length !== 0) {
  values.MediaMarks = mediaMarks.map((mark) => ({
    MarkStartTime: mark.startTime,
    MarkEndTime: mark.endTime,
    MarkContent: mark.content,
  }));
}
const res = await request('SubmitMediaProducingJob', {
  ...values,
});
Independently export each marked clip
Export the marked clips as independent videos. After a user clicks Export independently, a dialog box appears, which guides the user to configure parameters such as the name, storage location, format, resolution, and bitrate for each independent video. The Generate dialog box can be reused.
window.AliyunVideoEditor.init({
  ...
  exportFromMediaMarks: async (data) => { // Independently export each marked clip.
    const projectId = ''; // Leave this parameter empty. If it is not empty, the timeline of the current project may be overwritten.
    // Configure the following parameters by reusing the parameters in the Generate dialog box. The request parameters for the video production tasks are generated.
    const reqParams = data.map((item, index) => {
      return {
        ProjectId: projectId,
        Timeline: JSON.stringify(item.timeline),
        OutputMediaTarget: 'oss-object',
        OutputMediaConfig: JSON.stringify({
          // Specify the file name. If multiple files are to be exported, specify the names based on serial numbers.
          MediaURL: `https://example-bucket.oss-cn-shanghai.aliyuncs.com/example_${index}.mp4`,
        }),
        // Other parameters are omitted.
      };
    });
    // Submit multiple video production tasks.
    await Promise.all(
      reqParams.map(async (params) => {
        // Send a request to submit a video production task.
        await request('SubmitMediaProducingJob', params);
      }),
    );
  },
  ...
})
Split and export a video
Select multiple audio and video clips in the track area and click Generate As in the upper-right corner. The following section describes the features that can be added to the drop-down list:
Independently export each clip
Export the selected video clips as independent videos. After a user clicks Export independently, a dialog box appears, which guides the user to configure parameters such as the name, storage location, format, resolution, and bitrate for each independent video. The Generate dialog box can be reused.
window.AliyunVideoEditor.init({
  ...
  exportVideoClipsSplit: async (data) => { // Export clips as independent videos.
    const projectId = ''; // Leave this parameter empty. If it is not empty, the timeline of the current project may be overwritten.
    // Configure the following parameters by reusing the parameters in the Generate dialog box. The request parameters for the video production tasks are generated.
    const reqParams = data.map((item, index) => {
      return {
        ProjectId: projectId,
        Timeline: JSON.stringify(item.timeline),
        OutputMediaTarget: 'oss-object',
        OutputMediaConfig: JSON.stringify({
          // Specify the file name. If multiple files are to be exported, specify the names based on serial numbers.
          MediaURL: `https://example-bucket.oss-cn-shanghai.aliyuncs.com/example_${index}.mp4`,
        }),
        // Other parameters are omitted.
      };
    });
    // Submit multiple video production tasks.
    await Promise.all(
      reqParams.map(async (params) => {
        // Send a request to submit a video production task.
        await request('SubmitMediaProducingJob', params);
      }),
    );
  },
  ...
})
Export the clips after merging
Merge the selected clips based on their sequence and export them as a video by using the default production settings or custom settings. After a user clicks Export after merging, a dialog box appears, which guides the user to configure parameters such as the name, storage location, format, resolution, and bitrate for the video. The Generate dialog box can be reused.
window.AliyunVideoEditor.init({
  ...
  exportVideoClipsMerge: async (data) => { // Export clips after merging.
    const projectId = ''; // Leave this parameter empty. If it is not empty, the timeline of the current project may be overwritten.
    // Configure the following parameters by reusing the parameters in the Generate dialog box. The request parameters for the video production task are generated.
    const reqParam = {
      ProjectId: projectId,
      Timeline: JSON.stringify(data.timeline),
      OutputMediaTarget: 'oss-object',
      OutputMediaConfig: JSON.stringify({
        // Specify the file name.
        MediaURL: 'https://example-bucket.oss-cn-shanghai.aliyuncs.com/example.mp4',
      }),
    };
    // Send a request to submit the video production task.
    await request('SubmitMediaProducingJob', reqParam);
  },
  ...
})
Export a video
Export all materials in the entire timeline as a new video based on the layer structure, chronological order, and specified effects. For more information, see Add a video production dialog box.
Intelligently generate dubbing
By default, the Smart Dubbing button is not displayed on the video editing page. To add the button, specify the ttsConfig parameter. For more information, see the Demo code.
window.AliyunVideoEditor.init({
// Other parameters are omitted.
ttsConfig: {
interval: 3000,
submitAudioProduceJob: async (text, voice, voiceConfig = {}) => {
const storageListReq = await requestGet("GetStorageList");
const tempFileStorageLocation =
storageListReq.data.StorageInfoList.find((item) => {
return item.EditingTempFileStorage;
});
if (!tempFileStorageLocation) {
throw new Error("The temporary storage path is not specified.");
}
const { StorageLocation, Path } = tempFileStorageLocation;
// An audio file is generated for intelligent dubbing and stored in Object Storage Service (OSS). The bucket, path, and filename parameters are only for reference. You can use custom parameters.
const bucket = StorageLocation.split(".")[0];
const path = Path;
const filename = `${text.slice(0, 10)}${Date.now()}`;
const editingConfig = voiceConfig.custom
? {
customizedVoice: voice,
format: "mp3",
...voiceConfig,
}
: {
voice,
format: "mp3",
...voiceConfig,
};
// 1. Submit an intelligent dubbing task.
const res1 = await request("SubmitAudioProduceJob", {
// https://www.alibabacloud.com/help/en/ims/developer-reference/api-ice-2020-11-09-submitaudioproducejob
EditingConfig: JSON.stringify(editingConfig),
InputConfig: text,
OutputConfig: JSON.stringify({
bucket,
object: `${path}${filename}`,
}),
});
if (res1.status !== 200) {
return { jobDone: false, jobError: "The current text is not recognized." };
} else {
const jobId = get(res1, 'data.JobId');
return { jobId: jobId, jobDone: false };
}
},
getAudioJobResult: async (jobId) => {
const res = await requestGet("GetSmartHandleJob",{
JobId: jobId,
});
const isJobDone = get(res, 'data.State') === 'Finished';
let isMediaReady = false;
let isError = get(res, 'data.State') === 'Failed';
let result;
let audioMedia;
let mediaId;
let asr = [];
if (res.data && res.data?.JobResult) {
try {
result = res.data.JobResult;
mediaId = result.MediaId;
if (result.AiResult) {
asr = JSON.parse(result.AiResult);
}
} catch (ex) {
console.error(ex);
}
}
if (!mediaId && res.data && res.data.Output) {
mediaId = res.data.Output;
}
const defaultErrorText = 'The current text is not recognized.';
if (mediaId) {
const mediaRes = await request("GetMediaInfo",{
MediaId: mediaId,
});
if (mediaRes.status !== 200) {
isError = true;
}
const mediaStatus = get(mediaRes, 'data.MediaInfo.MediaBasicInfo.Status');
if (mediaStatus === 'Normal') {
isMediaReady = true;
const transAudios = transMediaList([get(mediaRes, 'data.MediaInfo')]);
audioMedia = transAudios[0];
if (!audioMedia) {
isError = true;
}
} else if (mediaStatus && mediaStatus.indexOf('Fail') >= 0) {
isError = true;
}
} else if (isJobDone) {
isError = true;
}
return {
jobId,
jobDone: isJobDone && isMediaReady,
result: audioMedia,
asr,
jobError: isError ? defaultErrorText : undefined,
};
}
},
});
Specify a custom font list
By default, the video editing SDK for web supports Alibaba Cloud official fonts.
// The fonts supported by Alibaba Cloud.
const FONT_FAMILIES = [
'alibaba-sans', // Alibaba PuHuiTi
'fangsong', // Fangsong
'kaiti', // KaiTi
'SimSun', // SimSun
'siyuan-heiti', // Source Han Sans
'siyuan-songti', // Source Han Serif
'wqy-zenhei-mono', // WenQuanYi Zen Hei Mono
'wqy-zenhei-sharp', // WenQuanYi Zen Hei Sharp
'wqy-microhei', // WenQuanYi Micro Hei
'zcool-gaoduanhei', // zcool-gdh
'zcool-kuaile', // HappyZcool
'zcool-wenyiti', // zcoolwenyiti
];
You can specify the customFontList parameter to display only some of the Alibaba Cloud official fonts or rearrange them.
window.AliyunVideoEditor.init({
  // Other parameters are omitted.
  customFontList: [
    // Use only the following fonts in the specified order.
    'SimSun',
    'kaiti',
    'alibaba-sans',
    'zcool-kuaile',
    'wqy-microhei',
  ]
});
You can specify the customFontList parameter to use custom fonts that are stored in an OSS bucket.
Important: Make sure that the account that can access the OSS bucket is the same as the account that submits the production task. Otherwise, the fonts cannot be downloaded.
window.AliyunVideoEditor.init({
  // Other parameters are omitted.
  customFontList: [
    // Use only the following official fonts and your own fonts.
    'SimSun',
    'kaiti',
    'alibaba-sans',
    'zcool-kuaile',
    'wqy-microhei',
    {
      key: 'azhupaopaoti', // The unique key of the font.
      name: 'azhupaopaoti', // The name to be displayed on the page.
      // The URL of the font file.
      url: 'https://test-shanghai.oss-cn-shanghai.aliyuncs.com/xxxxx/azhupaopaoti.ttf',
    },
    {
      key: 'HussarBoldWeb', // The unique key of the font.
      name: 'HussarBoldWeb', // The name to be displayed on the page.
      // The URL of the font file.
      url: 'https://test-shanghai.oss-cn-shanghai.aliyuncs.com/xxxxx/HussarBoldWeb.ttf',
    }
  ],
  /**
   * If the URL of your font file is dynamic, you can call the getDynamicSrc method to obtain the dynamic URL.
   * You can also use the getDynamicSrc method to process fonts in other places.
   *
   * @param {string} mediaId: the key of the font in customFontList. Example: HussarBoldWeb.
   * @param {string} mediaType: the media type. Set the value to font.
   * @param {string} mediaOrigin: the media source for distinguishing between public and private materials. Set the value to undefined because it is not involved in the font logic.
   * @param {string} InputURL: the input URL of the font file.
   * @returns {Promise<string>}: the actual URL of the font file.
   */
  getDynamicSrc: (mediaId, mediaType, mediaOrigin, InputURL) => {
    // If the OSS bucket for storing fonts is not dynamic, the input URL can be directly returned.
    // if (mediaType === 'font') {
    //   return Promise.resolve(InputURL);
    // }
    // The pseudocode for processing fonts.
    if (mediaType === 'font') {
      return api.getFontUrl({ id: mediaId, url: InputURL }).then((res) => {
        return res.data.url;
      });
    }
    // The logic for processing other materials such as videos and audio.
  }
});
You can specify the customFontList parameter to use custom fonts that are stored in your media asset library in Intelligent Media Services (IMS).
Important: Make sure that the account that can access the IMS media asset library is the same as the account that submits the video production task. Otherwise, the fonts cannot be downloaded when the video production task is submitted.
window.AliyunVideoEditor.init({
  // Other parameters are omitted.
  getDynamicSrc: (mediaId, mediaType, mediaOrigin, InputURL) => {
    const params = { MediaId: mediaId, OutputType: 'cdn' };
    // Use the input URL to dynamically obtain fonts from the media asset library.
    if (mediaType === 'font') {
      params.InputURL = InputURL;
      delete params.MediaId;
    }
    return request('GetMediaInfo', params).then((res) => {
      // The following sample code is provided only for reference. We recommend that you add error handling. For example, return an error message if FileInfoList is an empty array.
      return res.data.MediaInfo.FileInfoList[0].FileBasicInfo.FileUrl;
    });
  },
});
Separate the audio track from a video
If you want to enable Separate audio track on the Basic tab in the property area, at least one of the following conditions must be met:
The current video has a proxy audio file. To declare that a media asset has a proxy audio file, add the hasTranscodedAudio=true tag to the media asset. You can use one of the following methods for the declaration. The effective scope and priority vary based on the declaration method.
Add the Media.video.hasTranscodedAudio=true tag to a video that is imported to the project. This tag indicates that the video has a proxy audio file and takes effect only on that video. This method has a higher priority and is recommended.
Add the config.hasTranscodedAudio=true tag to make a global declaration when you initialize the video editor, as shown in the sketch after this list. This tag indicates that all videos that are imported to the project have proxy audio files and takes effect on all videos. This method has a lower priority: if a media asset does not have the Media.video.hasTranscodedAudio=true tag, the global declaration takes effect and the media asset can be used for audio track separation. Otherwise, the Media.video.hasTranscodedAudio=true tag takes effect.
The current video contains audio tracks, and the length of the original video is no longer than 30 minutes.
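A minimal sketch of the global declaration that is described in the preceding list; the per-video declaration is shown in the transMediaList example later in this topic:
window.AliyunVideoEditor.init({
  // Other parameters are omitted.
  // Declare that all videos imported to the project have proxy audio files.
  hasTranscodedAudio: true,
});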
Generate a proxy audio file
If a video is stored in IMS, you can use ApsaraVideo Media Processing Service (MPS) to transcode the audio of the video. After the audio is transcoded, the proxy audio file is generated on the VideoURL tab of the media asset details page.
You can perform audio transcoding on a video by using one of the following methods:
On the Audio/Video page, find the video that you want to transcode, click Media Processing in the Actions column, and then select a transcoding template or workflow.
On the Task Management page, create an audio transcoding task.
On the Upload Audio/Video page, set the Media Processing parameter to Media Processing after Upload and select a workflow related to audio transcoding.
Add the hasTranscodedAudio=true tag to a video
When you import media assets, the searchMedia and getEditingProjectMaterials methods need to check whether the media assets have transcoded audio during data conversion.
// The video editing SDK for web does not provide the request method. The following sample code is provided only for reference. You can use a network library such as Axios based on your business requirements.
window.AliyunVideoEditor.init({
...,
getEditingProjectMaterials: () => {
if (projectId) { // Write your own code to save the project ID.
return request('GetEditingProjectMaterials', { // https://www.alibabacloud.com/help/en/ims/developer-reference/api-ice-2020-11-09-geteditingprojectmaterials
ProjectId: projectId
}).then((res) => {
const data = res.data.MediaInfos;
return transMediaList(data); // Transform data. For more information, see the following section.
});
}
return Promise.resolve([]);
},
...
});
/**
* Convert the material information on the server to the format that is supported by the video editing SDK for web.
* You can specify the hasTranscodedAudio parameter for a video to mark whether the video has a proxy audio file.
*/
function transMediaList(data) {
if (!data) return [];
if (Array.isArray(data)) {
return data.map((item) => {
const basicInfo = item.MediaBasicInfo;
const fileBasicInfo = item.FileInfoList[0].FileBasicInfo;
const mediaId = basicInfo.MediaId;
const result = {
mediaId
};
const mediaType = basicInfo.MediaType
result.mediaType = mediaType;
if (mediaType === 'video') {
result.video = {
title: fileBasicInfo.FileName,
...,
// Specify whether the video has a proxy audio file.
hasTranscodedAudio: !!getTranscodedAudioFileFromFileInfoList(item?.FileInfoList || []),
// If the useDynamicSrc parameter is set to false, you must specify the URL of the proxy audio file. Otherwise, you do not need to specify the URL of the proxy audio file.
agentAudioSrc: '*'
};
...
} else if (mediaType === 'audio') {
...
} else if (mediaType === 'image') {
...
}
return result;
});
} else {
return [data];
}
}
/**
* Query the information about the transcoded audio file from the FileInfoList parameter of the video.
* @param {list<FileInfo>} fileInfoList
* @returns FileInfo | undefined
* The value of the MediaInfo.FileInfoList parameter returned by the ListMediaBasicInfos or SearchMedia method contains only information about the source file. The GetMediaInfo or BatchGetMediaInfos method returns the information about all streams.
*/
export const getTranscodedAudioFileFromFileInfoList = (fileInfoList = []) => {
if (!fileInfoList.length) return;
// Filter the files whose FileType is transcode_file to find the transcoded audio files.
const transcodedAudioFiles = fileInfoList.filter((item = {}) => {
return (
item?.FileBasicInfo?.FileType === 'transcode_file' &&
getFileType(item?.FileBasicInfo?.FileName) === MEDIA_TYPE.AUDIO
);
});
if (transcodedAudioFiles.length) {
const mp3FileInfo = fileInfoList.find(
(item = {}) => getFileExtension(item?.FileBasicInfo?.FileName).toUpperCase() === 'MP3'
);
// Preferentially return MP3 files.
return mp3FileInfo || transcodedAudioFiles[0];
}
};
Load proxy audio
Based on whether the editor can dynamically obtain the URLs of media assets, the video editing SDK for web performs the following operations to separate the audio track from a video that has a proxy audio file:
(Common) The video editing SDK for web dynamically obtains the resource URL. After the audio track is separated, the SDK pulls the audio URL that is returned by the getDynamicSrc method. If the video has the hasTranscodedAudio=true tag or the global config.hasTranscodedAudio=true tag, the SDK unconditionally uses the audio URL that the getDynamicSrc method returns for the corresponding mediaType. Therefore, returning the correct audio URL from getDynamicSrc accelerates the loading and drawing of waveforms.
For a static resource, the Media.video.agentAudioSrc parameter records the URL of the proxy audio file in the video data. If the video has the hasTranscodedAudio=true tag or the global config.hasTranscodedAudio=true tag, the SDK unconditionally uses agentAudioSrc || src for audio waveform drawing. Therefore, specifying the correct value of the agentAudioSrc parameter accelerates the loading and drawing of waveforms.
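A minimal sketch of the dynamic case, assuming the request helper from the demo and the getTranscodedAudioFileFromFileInfoList helper defined earlier in this topic. The mediaType value that the SDK passes when it requests the separated audio is assumed to be 'audio'; verify this against your SDK version:
window.AliyunVideoEditor.init({
  // Other parameters are omitted.
  getDynamicSrc: (mediaId, mediaType) => {
    return request('GetMediaInfo', { MediaId: mediaId }).then((res) => {
      const fileInfoList = get(res, 'data.MediaInfo.FileInfoList', []);
      if (mediaType === 'audio') { // Assumption: the SDK requests the proxy audio with this mediaType.
        // Prefer the transcoded (proxy) audio file so that waveforms load and draw faster.
        const audioFile = getTranscodedAudioFileFromFileInfoList(fileInfoList);
        if (audioFile) {
          return audioFile.FileBasicInfo.FileUrl;
        }
      }
      // Fall back to the first file for other material types.
      return get(fileInfoList, '[0].FileBasicInfo.FileUrl');
    });
  },
});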
Add a digital human
To add a digital human, you must update the getDynamicSrc method and configure the avatarConfig parameter.
getDynamicSrc
When you add a digital human, two videos are generated: the original video with a green screen, and a black-and-white mask video that is used for transparent background compositing. You must extract the mask video as a transparent mask and pass it to the video editing SDK for web so that the SDK can remove the background around the digital human.
avatarConfig
Parameter or method | Description |
outputConfigs | The output resolution and bitrate of the digital human videos. You can configure this parameter based on your business requirements. |
filterOutputConfig | Filters the output resolutions of different digital humans based on your business requirements. If you call the ListSmartSysAvatarModels method to obtain the digital human list and the value returned for the OutputMask parameter is false, the output resolution can only be 1920 × 1080 or 1080 × 1920. |
refreshInterval | The interval at which the status of the digital human synthesis task is polled. Unit: milliseconds. |
getAvatarList | Calls the ListSmartSysAvatarModels method to query the list of official digital humans. |
submitAvatarVideoJob | Submits the digital human synthesis task. If you use a temporary path to save video files, you must specify the temporary path in the IMS console in advance. |
getAvatarVideoJob | Queries the status of the digital human synthesis task. After a digital human synthesis task starts, the video editing SDK for web automatically calls the getAvatarVideoJob method at the specified polling interval. After the task is complete, make sure that both the black-and-white mask video and the video with a green screen are generated in the media asset library. The status of the task is returned each time the task is polled. |
getAvatar | Queries the information about a digital human based on the ID of the digital human. |
window.AliyunVideoEditor.init({
  // Change the logic that is used to dynamically obtain the URL. You need to extract the digital human video that is masked with black and white as a transparent mask and pass it to the video editing SDK for web.
  getDynamicSrc: (mediaId, mediaType) => {
    return request('GetMediaInfo', { // https://www.alibabacloud.com/help/en/ims/developer-reference/api-ice-2020-11-09-getmediainfo
      MediaId: mediaId
    }).then((res) => {
      // The following sample code is provided only for reference. We recommend that you configure the error logic. For example, you can configure the error message that is returned if FileInfoList is an empty array.
      const fileInfoList = get(res, 'data.MediaInfo.FileInfoList', []);
      let mediaUrl, maskUrl;
      let sourceFile = fileInfoList.find((item) => {
        return item?.FileBasicInfo?.FileType === 'source_file';
      });
      if (!sourceFile) {
        sourceFile = fileInfoList[0];
      }
      const maskFile = fileInfoList.find((item) => {
        return (
          item.FileBasicInfo &&
          item.FileBasicInfo.FileUrl &&
          item.FileBasicInfo.FileUrl.indexOf('_mask') > 0
        );
      });
      if (maskFile) {
        maskUrl = get(maskFile, 'FileBasicInfo.FileUrl');
      }
      mediaUrl = get(sourceFile, 'FileBasicInfo.FileUrl');
      if (!maskUrl) {
        return mediaUrl;
      }
      return {
        url: mediaUrl,
        maskUrl
      };
    });
  },
  // Configure the digital human.
  avatarConfig: {
    // Specify the resolution and bitrate of the output videos.
    filterOutputConfig: (item, configs) => {
      if (item.outputMask === false) {
        return [
          { width: 1920, height: 1080, bitrates: [4000] },
          { width: 1080, height: 1920, bitrates: [4000] },
        ];
      }
      return configs;
    },
    // Specify the interval at which the status of the task is polled. Unit: milliseconds.
    refreshInterval: 2000,
    // Query the list of official digital humans.
    getAvatarList: () => {
      return [
        {
          id: "default",
          default: true,
          name: 'Official digital human',
          getItems: async (pageNo, pageSize) => {
            const res = await requestGet("ListSmartSysAvatarModels", {
              PageNo: pageNo,
              PageSize: pageSize,
              SdkVersion: window.AliyunVideoEditor.version,
            });
            if (res && res.status === 200) {
              return {
                total: get(res, "data.TotalCount"),
                items: get(res, "data.SmartSysAvatarModelList", []).map((item) => {
                  return {
                    avatarName: item.AvatarName,
                    avatarId: item.AvatarId,
                    coverUrl: item.CoverUrl,
                    videoUrl: item.VideoUrl,
                    outputMask: item.OutputMask,
                  };
                }),
              };
            }
            return {
              total: 0,
              items: [],
            };
          },
        },
        {
          id: "custom",
          default: false,
          name: "My digital human",
          getItems: async (pageNo, pageSize) => {
            const res = await requestGet("ListAvatars", {
              PageNo: pageNo,
              PageSize: pageSize,
              SdkVersion: window.AliyunVideoEditor.version,
            });
            if (res && res.status === "200") {
              const avatarList = get(res, "data.Data.AvatarList", []);
              const coverMediaIds = avatarList.map((aitem) => {
                return aitem.Portrait;
              });
              const coverListRes = await requestGet("BatchGetMediaInfos", {
                MediaIds: coverMediaIds.join(","),
                AdditionType: "FileInfo",
              });
              const mediaInfos = get(coverListRes, "data.MediaInfos");
              const idCoverMapper = mediaInfos.reduce((result, m) => {
                result[m.MediaId] = get(m, "FileInfoList[0].FileBasicInfo.FileUrl");
                return result;
              }, {});
              return {
                total: get(res, "data.TotalCount"),
                items: avatarList.map((item) => {
                  return {
                    avatarName: item.AvatarName || "",
                    avatarId: item.AvatarId,
                    coverUrl: idCoverMapper[item.Portrait],
                    videoUrl: undefined,
                    outputMask: false,
                    transparent: item.Transparent,
                  };
                }),
              };
            }
            return {
              total: 0,
              items: [],
            };
          },
        },
      ];
    },
    // Submit the digital human synthesis task.
    submitAvatarVideoJob: async (job) => {
      const storageListReq = await requestGet("GetStorageList");
      const tempFileStorageLocation = storageListReq.data.StorageInfoList.find((item) => {
        return item.EditingTempFileStorage;
      });
      if (tempFileStorageLocation) {
        const { StorageLocation, Path } = tempFileStorageLocation;
        /**
         * Check the settings of the digital human video. For example, you can check whether the video uses a transparent background.
         * outputMask: specifies whether to generate a mask video. If this parameter is set to true, a mask video and a video with a pure color screen in the MP4 format are generated. The value of this parameter is of the Boolean type.
         * transparent: specifies whether the video is transparent. If this parameter is set to false, the digital human video cannot be a WebM video with a transparent background. The value of this parameter is of the Boolean type.
         */
        const { outputMask, transparent } = job.avatar;
        const filename = outputMask || transparent === false
          ? `${encodeURIComponent(job.title)}-${Date.now()}.mp4`
          : `${encodeURIComponent(job.title)}-${Date.now()}.webm`;
        const outputUrl = `https://${StorageLocation}/${Path}${filename}`;
        const params = {
          UserData: JSON.stringify(job),
        };
        if (job.type === "text") {
          params.InputConfig = JSON.stringify({
            Text: job.data.text,
          });
          params.EditingConfig = JSON.stringify({
            AvatarId: job.avatar.avatarId,
            Voice: job.data.params.voice, // The speaker. This parameter is required only if the type parameter is set to text.
            SpeechRate: job.data.params.speechRate, // The speech rate. This parameter is required only if the type parameter is set to text. Valid values: -500 to 500. Default value: 0.
            PitchRate: job.data.params.pitchRate, // The tone. This parameter is required only if the type parameter is set to text. Valid values: -500 to 500. Default value: 0.
            Volume: job.data.params.volume,
          });
          params.OutputConfig = JSON.stringify({
            MediaURL: outputUrl,
            Bitrate: job.data.output.bitrate,
            Width: job.data.output.width,
            Height: job.data.output.height,
          });
        } else {
          params.InputConfig = JSON.stringify({
            MediaId: job.data.mediaId,
          });
          params.EditingConfig = JSON.stringify({
            AvatarId: job.avatar.avatarId,
          });
          params.OutputConfig = JSON.stringify({
            MediaURL: outputUrl,
            Bitrate: job.data.output.bitrate,
            Width: job.data.output.width,
            Height: job.data.output.height,
          });
        }
        const res = await request("SubmitAvatarVideoJob", params);
        if (res.status === 200) {
          return {
            jobId: res.data.JobId,
            mediaId: res.data.MediaId,
          };
        } else {
          throw new Error("Failed to submit the task.");
        }
      } else {
        throw new Error("Failed to obtain the temporary path.");
      }
    },
    // Query the status of the digital human synthesis task, which is called in polling mode.
    getAvatarVideoJob: async (jobId) => {
      try {
        const res = await requestGet("GetSmartHandleJob", { JobId: jobId });
        if (res.status !== 200) {
          throw new Error(`response error:${res.data && res.data.ErrorMsg}`);
        }
        let job;
        if (res.data.UserData) {
          job = JSON.parse(res.data.UserData);
        }
        let video;
        let done = false;
        let subtitleClips;
        // Parse the generated subtitle.
        if (res.data.JobResult && res.data.JobResult.AiResult) {
          const apiResult = JSON.parse(res.data.JobResult.AiResult);
          if (
            apiResult &&
            apiResult.subtitleClips &&
            typeof apiResult.subtitleClips === "string"
          ) {
            subtitleClips = JSON.parse(apiResult.subtitleClips);
          }
        }
        const mediaId = res.data.JobResult.MediaId;
        if (res.data.State === "Finished") {
          // Query the status of the generated media asset.
          const res2 = await request("GetMediaInfo", {
            MediaId: mediaId,
          });
          if (res2.status !== 200) {
            throw new Error(`response error:${res2.data && res2.data.ErrorMsg}`);
          }
          // Check whether the generated video and transparent mask video meet the requirements.
          const fileLength = get(res2, "data.MediaInfo.FileInfoList", []).length;
          const { avatar } = job;
          const statusOk =
            get(res2, "data.MediaInfo.MediaBasicInfo.Status") === "Normal" &&
            (avatar.outputMask ? fileLength >= 2 : fileLength > 0);
          const result = statusOk
            ? transMediaList([get(res2, "data.MediaInfo")])
            : [];
          video = result[0];
          done = !!video && statusOk;
          if (done) {
            // Associate the new digital human material with the project.
            await request("AddEditingProjectMaterials", {
              ProjectId: projectId,
              MaterialMaps: JSON.stringify({
                video: mediaId,
              }),
            });
          }
        } else if (res.data.State === "Failed") {
          return {
            done: false,
            jobId,
            mediaId,
            job,
            errorMessage: `job status fail,status:${res.data.State}`,
          };
        }
        // Return the status of the task. The polling stops if done is returned.
        return {
          done,
          jobId: res.data.JobId,
          mediaId,
          job,
          video,
          subtitleClips,
        };
      } catch (ex) {
        return {
          done: false,
          jobId,
          errorMessage: ex.message,
        };
      }
    },
    getAvatar: async (id) => {
      const listRes = await requestGet("ListSmartSysAvatarModels", {
        SdkVersion: window.AliyunVideoEditor.version,
        PageNo: 1,
        PageSize: 100,
      });
      const sysAvatar = get(listRes, "data.SmartSysAvatarModelList", []).find((item) => {
        return item.AvatarId === id;
      });
      if (sysAvatar) {
        return {
          ...objectKeyPascalCaseToCamelCase(sysAvatar),
        };
      }
      const res = await requestGet("GetAvatar", { AvatarId: id });
      const item = get(res, "data.Data.Avatar");
      const coverListRes = await request("BatchGetMediaInfos", {
        MediaIds: item.Portrait,
        AdditionType: "FileInfo",
      });
      const mediaInfos = get(coverListRes, "data.MediaInfos");
      const idCoverMapper = mediaInfos.reduce((result, m) => {
        result[m.MediaId] = get(m, "FileInfoList[0].FileBasicInfo.FileUrl");
        return result;
      }, {});
      return {
        avatarName: item.AvatarName || "test",
        avatarId: item.AvatarId,
        coverUrl: idCoverMapper[item.Portrait],
        videoUrl: undefined,
        outputMask: false,
        transparent: item.Transparent,
      };
    },
  },
})
Add custom dedicated voice
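Dedicated voices are added through the customVoiceGroups parameter of the init method. The following sample code converts the voice groups that are returned by the server into the SDK format, prepends two dedicated voice groups, and passes the result to the init method; specify the customVoiceGroups parameter before you call the init method.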
export const transVoiceGroups = (data = []) => {
return data.map(({ Type: type, VoiceList = [] }) => {
return {
type,
voiceList: VoiceList.map((item) => {
const obj = {};
Object.keys(item).forEach((key) => {
obj[lowerFirst(key)] = item[key];
});
return obj;
}),
};
});
};
const customVoiceGroups = await requestGet('ListSmartVoiceGroups').then((res) => {
const commonItems = transVoiceGroups(get(res, 'data.VoiceGroups', []));
const customItems = [
{
type: 'Basic',
category: 'Dedicated voice', // Dedicated voice is supported in V4.12.0 and later.
emptyContent: {
description: 'No voice is available. You can create dedicated voice by yourself.',
link: '',
linkText: 'Create dedicated voice.',
},
getVoiceList: async (page, pageSize) => {
const custRes = await requestGet('ListCustomizedVoices',{ PageNo: page, PageSize: pageSize });
const items = get(custRes, 'data.Data.CustomizedVoiceList');
const total = get(custRes, 'data.Data.Total');
const kv = {
story: 'Story',
interaction: 'Interaction',
navigation: 'Navigation',
};
return {
items: items.map((it) => {
return {
desc: it.VoiceDesc || kv[it.Scenario] || it.Scenario,
voiceType: it.Gender === 'male' ? 'Male' : 'Female',
voiceUrl: it.VoiceUrl || '',
tag: it.VoiceDesc || it.Scenario,
voice: it.VoiceId,
name: it.VoiceName || it.VoiceId,
remark: it.Scenario,
demoMediaId: it.DemoAudioMediaId,
custom: true,
};
}),
total,
};
},
getVoice: async (voiceId) => {
const custRes = await requestGet('GetCustomizedVoice',{ VoiceId: voiceId });
const item = get(custRes, 'data.Data.CustomizedVoice');
const kv = {
story: 'Story',
interaction: 'Interaction',
navigation: 'Navigation',
};
return {
desc: item.VoiceDesc || kv[item.Scenario] || item.Scenario,
voiceType: item.Gender === 'male' ? 'Male' : 'Female',
voiceUrl: item.VoiceUrl || '',
tag: item.VoiceDesc || item.Scenario,
voice: item.VoiceId,
name: item.VoiceName || item.VoiceId,
remark: item.Scenario,
demoMediaId: item.DemoAudioMediaId,
custom: true,
};
},
getDemo: async (mediaId) => {
const mediaInfo = await requestGet('GetMediaInfo',{ MediaId: mediaId });
const src = get(mediaInfo, 'data.MediaInfo.FileInfoList[0].FileBasicInfo.FileUrl');
return {
src: src,
};
},
},
{
type: 'General',
category: 'Dedicated voice',
emptyContent: {
description: 'No voice is available. You can create dedicated voice by yourself.',
link: '',
linkText: 'Create dedicated voice.',
},
getVoiceList: async (page, pageSize) => {
const custRes = await requestGet('ListCustomizedVoices',{ PageNo: page, PageSize: pageSize, Type: 'Standard', });
const items = get(custRes, 'data.Data.CustomizedVoiceList');
const total = get(custRes, 'data.Data.Total');
return {
items: items.map((it) => {
return {
desc: it.VoiceDesc,
voiceType: it.Gender === 'male' ? 'Male' : 'Female',
voiceUrl: it.VoiceUrl || '',
tag: it.VoiceDesc,
voice: it.VoiceId,
name: it.VoiceName || it.VoiceId,
remark: it.Scenario,
demoMediaId: it.DemoAudioMediaId,
custom: true,
};
}),
total,
};
},
getVoice: async (voiceId) => {
const custRes = await requestGet('GetCustomizedVoice',{ VoiceId: voiceId });
const item = get(custRes, 'data.Data.CustomizedVoice');
const kv = {
story: 'Story',
interaction: 'Interaction',
navigation: 'Navigation',
};
return {
desc: item.VoiceDesc || kv[item.Scenario] || item.Scenario,
voiceType: item.Gender === 'male' ? 'Male' : 'Female',
voiceUrl: item.VoiceUrl || '',
tag: item.VoiceDesc || item.Scenario,
voice: item.VoiceId,
name: item.VoiceName || item.VoiceId,
remark: item.Scenario,
demoMediaId: item.DemoAudioMediaId,
custom: true,
};
},
getDemo: async (mediaId) => {
const mediaInfo = await requestGet('GetMediaInfo',{ MediaId: mediaId });
const src = get(mediaInfo, 'data.MediaInfo.FileInfoList[0].FileBasicInfo.FileUrl');
return {
src: src,
};
},
},
].concat(commonItems);
return customItems;
})
// You can call the init method only after you specify the customVoiceGroups parameter.
window.AliyunVideoEditor.init({
...
customVoiceGroups:customVoiceGroups
...
})
Add the Public Materials menu
By default, the Public Materials menu is not displayed on the video editing page. To add this menu, specify the publicMaterials parameter. For more information, see the Demo code.
window.AliyunVideoEditor.init({
// Other parameters are omitted.
publicMaterials: {
getLists: async () => {
const resultPromise = [
{
bType: "bgm",
mediaType: "audio",
name: "Music",
},
{
bType: "bgi",
mediaType: "image",
styleType: "background",
name: "Background",
},
].map(async (item) => {
const res = await request("ListAllPublicMediaTags", {
BusinessType: item.bType,
});
const tagList = get(res, "data.MediaTagList");
return tagList.map((tag) => {
const tagName =
locale === "zh-CN"
? tag.MediaTagNameChinese
: tag.MediaTagNameEnglish;
return {
name: item.name,
key: item.bType,
mediaType: item.mediaType,
styleType: item.styleType,
tag: tagName,
getItems: async (pageNo, pageSize) => {
const itemRes = await request("ListPublicMediaBasicInfos", {
BusinessType: item.bType,
MediaTagId: tag.MediaTagId,
PageNo: pageNo,
PageSize: pageSize,
IncludeFileBasicInfo: true,
});
const total = get(itemRes, "data.TotalCount");
const items = get(itemRes, "data.MediaInfos", []);
const transItems = transMediaList(items);
return {
items: transItems,
end: pageNo * pageSize >= total,
};
},
};
});
});
const resultList = await Promise.all(resultPromise);
const result = resultList.flat();
return result;
},
},
});
Asynchronously import media assets
type InputMedia = InputVideo | InputAudio | InputImage;
interface InputSource {
sourceState?: 'ready' | 'loading' | 'fail'; // The state of the material. A value of loading indicates that the material is being loaded and cannot be added to the track. A value of fail indicates that the material has errors and cannot be added to the track. A value of ready indicates that the material can be previewed and added. Default state: ready.
}
// .... For more information, see the data structure in the API reference.
// If the media assets to be imported require asynchronous processing, such as transcoding and generating image sprites, you can set the state to loading.
// For example, the state is set to loading for the materials returned by the searchMedia method. The following section describes how to import a third-party media asset based on its URL.
searchMedia: async () => {
// 1. Select a third-party media asset.
// 2. Call the RegisterMediaInfo method to register the third-party media asset with the media asset library and obtain the media asset ID.
// 3. Call the GetMediaInfo method to obtain the information about the registered third-party media asset and set its state to loading.
//.....
return [
{
mediaId: "https://xxxx.xxxxx.mp4",
mediaType: "video",
mediaIdType: "mediaURL",
sourceState: "loading",
video: {
title: "tettesttsete",
coverUrl:"https://xxxxxx.jpg",
duration: 10,
},
},
]
}
// Update the status of the third-party media asset after an image sprite is generated for it.
AliyunVideoEditor.updateProjectMaterials((old) => {
return old.map((item) => {
if (item.mediaId === mediaId) {
if ("video" in item) {
item.video.spriteConfig = {
num: "32",
lines: "10",
cols: "10",
};
item.video.sprites = [image]; // image indicates the URL of the generated sprite.
item.sourceState = "ready";
}
}
return item;
});
});
Translate a video
To add the video translation features, specify the videoTranslation parameter. The following table describes the modules of this parameter.
Parameter | Module | Description |
translation | Video translation | This module is interconnected with the backend API of video translation and is used to translate videos. For more information, see SubmitVideoTranslationJob. |
detext | Subtitle erasure | This module is interconnected with the backend API of subtitle erasure and is used to separately erase video subtitles. For more information, see SubmitIProductionJob. |
captionExtraction | Subtitle extraction | This module is interconnected with the backend API of subtitle extraction and is used to separately extract video subtitles. For more information, see SubmitIProductionJob. |
Example
window.AliyunVideoEditor.init({
// Other parameters are omitted.
videoTranslation: {
translation: {
submitVideoTranslationJob: async (params) => {
// The temporary storage address. You can use the address based on your business requirements.
const tempFileStorageLocation = await getTempFileLocation();
if (!tempFileStorageLocation) {
return {
jobDone: false,
jobError: 'Specify a temporary storage address.'
};
}
const item = tempFileStorageLocation;
const path = item.Path;
if (params.editingConfig.SourceLanguage !== 'zh') {
return {
jobDone: false,
jobError: 'The source language can only be Chinese.'
};
}
if (params.type === 'Video') { // Translate video materials.
const storageType = item.StorageType;
let outputConfig = {
MediaURL: `https://${item.StorageLocation}/${path}videoTranslation-${params.mediaId}.mp4`,
};
if (storageType === 'vod_oss_bucket') {
outputConfig = {
OutputTarget: 'vod',
StorageLocation: get(item, 'StorageLocation'),
FileName: `videoTranslation-${params.mediaId}.mp4`,
TemplateGroupId: 'VOD_NO_TRANSCODE',
};
}
const res = await request("SubmitVideoTranslationJob",{
InputConfig: JSON.stringify({
Type: params.type,
Media: params.mediaId,
}),
OutputConfig: JSON.stringify(outputConfig),
EditingConfig: JSON.stringify(params.editingConfig),
});
return {
jobDone: false,
jobId: res.data.Data.JobId,
};
}
if (params.type === 'Text') {// Translate a subtitle.
const res = await request("SubmitVideoTranslationJob",{
InputConfig: JSON.stringify({
Type: params.type,
Text: params.text,
}),
EditingConfig: JSON.stringify(params.editingConfig),
});
return {
jobDone: false,
jobId: res.data.Data.JobId,
};
}
if (params.type === 'TextArray') {// Translate a subtitle array.
const res = await request("SubmitVideoTranslationJob",{
InputConfig: JSON.stringify({
Type: params.type,
TextArray: JSON.stringify(params.textArray),
}),
EditingConfig: JSON.stringify(params.editingConfig),
});
return {
jobDone: false,
jobId: res.data.Data.JobId,
};
}
return {
jobDone: false,
jobError: 'The job type is not supported.',
};
},
getVideoTranslationJob: async (jobId) => {
const resp = await request("GetSmartHandleJob",{
JobId: jobId,
});
const res = resp.data;
if (res.State === 'Executing' || res.State === 'Created') {
return {
jobDone: false,
jobId,
};
}
if (res.State === 'Failed') {
return {
jobDone: true,
jobId,
jobError: 'Task execution failed',
};
}
let isJobDone = true;
let text;
let textArray;
let timeline;
let jobError;
if (res.JobResult.AiResult) {
const aiResult = JSON.parse(res.JobResult.AiResult);
const projectId1 = aiResult.EditingProjectId;
if (projectId1) {
const projectRes = await request('GetEditingProject',{
ProjectId: projectId1,
RequestSource: 'WebSDK',
});
const timelineConvertStatus = get(projectRes, 'data.Project.TimelineConvertStatus');
if (timelineConvertStatus === 'ConvertFailed') {
jobError = 'Task execution failed';
} else if (timelineConvertStatus === 'Converted') {
isJobDone = true;
} else {
isJobDone = false;
}
timeline = projectRes.data.Project.Timeline;
}
text = aiResult.TranslatedText;
textArray = aiResult.TranslatedTextArray;
}
return {
jobDone: isJobDone,
jobError,
jobId,
result: {
text,
textArray,
timeline,
},
};
},
},
detext: {
submitDetextJob: async ({ mediaId, mediaIdType, box }) => {
const tempFileStorageLocation = await getTempFileLocation();
if (!tempFileStorageLocation) {
return {
jobDone: false,
jobError: 'Specify a temporary storage address.'
};
}
const item = tempFileStorageLocation;
const path = item.Path;
const res = await request("SubmitIProductionJob",{
FunctionName: 'VideoDetext',
Input: JSON.stringify({
Type: mediaIdType === 'mediaURL' ? 'OSS' : 'Media',
Media: mediaId,
}),
Output: JSON.stringify({
Type: 'OSS',
Media: `https://${item.StorageLocation}/${path}VideoDetext-${mediaId}.mp4`,
}),
JobParams:
box && box !== 'auto'
? JSON.stringify({
Boxes: JSON.stringify(box),
})
: undefined,
});
return {
jobDone: false,
jobId: res.data.JobId,
};
},
getDetextJob: async (jobId) => {
const resp = await request("QueryIProductionJob",{ JobId: jobId });
const res = resp.data;
if (res.Status === 'Queuing' || res.Status === 'Analysing') {
return {
jobDone: false,
jobId,
};
}
if (res.Status === 'Fail') {
return {
jobDone: true,
jobId,
jobError: 'Task execution failed',
};
}
const mediaUrl = resp.data.Output.Media;
const mediaInfoRes = await request("GetMediaInfo",{ InputURL: mediaUrl });
if (mediaInfoRes.code !== '200') {
await request("RegisterMediaInfo",{ InputURL: mediaUrl });
return {
jobDone: false,
jobId,
};
}
const mediaStatus = get(mediaInfoRes, 'data.MediaInfo.MediaBasicInfo.Status');
let isError = false;
let isMediaReady = false;
let inputVideo;
if (mediaStatus === 'Normal') {
const transVideo = transMediaList([get(mediaInfoRes, 'data.MediaInfo')]);
inputVideo = transVideo[0];
isMediaReady = true;
if (!inputVideo) {
isError = true;
}
} else if (mediaStatus && mediaStatus.indexOf('Fail') >= 0) {
isError = true;
}
return {
jobDone: isMediaReady,
jobError: isError ? 'Task execution failed': undefined,
jobId: res.JobId,
result: {
video: inputVideo,
},
};
},
},
captionExtraction: {
submitCaptionExtractionJob: async ({ mediaId, mediaIdType, box }) => {
const tempFileStorageLocation = await getTempFileLocation();
if (!tempFileStorageLocation) {
return {
jobDone: false,
jobError: 'Specify a temporary storage address.'
};
}
const item = tempFileStorageLocation;
const path = item.Path;
let roi;
if (Array.isArray(box) && box.length > 0 && box[0] && box[0].length === 4) {
const [x, y, width, height] = box[0];
roi = [
[y, y + height],
[x, x + width],
];
}
const res = await request('SubmitIProductionJob',{
FunctionName: 'CaptionExtraction',
Input: JSON.stringify({
Type: mediaIdType === 'mediaURL' ? 'OSS' : 'Media',
Media: mediaId,
}),
Output: JSON.stringify({
Type: 'OSS',
Media: `https://${item.StorageLocation}/${path}CaptionExtraction-${mediaId}.srt`,
}),
JobParams:
box && box !== 'auto'
? JSON.stringify({
roi: roi,
})
: undefined,
});
return {
jobDone: false,
jobId: res.data.JobId,
};
},
getCaptionExtractionJob: async (jobId) => {
const resp = await request('QueryIProductionJob',{ JobId: jobId });
const res = resp.data;
if (res.Status === 'Queuing' || res.Status === 'Analysing') {
return {
jobDone: false,
jobId,
};
}
if (res.Status === 'Fail') {
return {
jobDone: true,
jobId,
jobError: 'Task execution failed',
};
}
const mediaUrl = resp.data.OutputUrls[0];
const srtRes = await fetch(mediaUrl.replace('http:', ''));
const srtText = await srtRes.text();
return {
jobDone: true,
jobId: res.JobId,
result: {
srtContent: srtText,
},
};
},
},
}
});