ModelScope evaluation
ModelScope aims to build a next-generation, open-source model-as-a-service sharing platform that provides AI developers with flexible, easy-to-use, low-cost, one-stop model services, making it easier to put models into real applications.
ModelScope provides model libraries, datasets, and documentation. With the ModelScope Library environment, we can apply open models in practice, run tuning tests on pre-trained models, and use the documentation to learn how the models are implemented.
Of course, I am by no means an AI developer, just a complete novice, but I will still take this opportunity to write a review and have some fun with the ModelScope model library.
Foreword:
In this Internet age of exploding information, video platforms have become one of our main pastimes. I myself watch videos on my phone every day, and occasionally I create video works and upload them to a platform. In a platform's creation center, uploaded videos usually have to be assigned to a category or section, and different platforms handle this differently: some require us to pick the category manually, while others select one automatically, though the automatic result often does not match our intention.
On a domestic bullet-screen (danmaku) video site, the section is selected automatically by the platform when a video is uploaded; the system fills it in with one click. Here, the video we upload for testing is an anime fan work, and the platform automatically selects the animation section, with no problems.
But when we rename the same video file to something else, such as "test" in the screenshot, and upload it again,
the platform's automatic section is no longer accurate: the video is filed under Computer Technology in the technology section.
Although we are not developers and do not really know what technique the platform uses to assign uploaded videos to sections, it may rely on parsing the file name. What is certain is that the automatic classification on upload can be badly wrong.
Returning to the topic of our model evaluation: the ModelScope model library provides exactly such a model, the DAMO video classification model cv_resnet50_video-category.
Model description: the model uses a ResNet50 backbone to extract visual features, and a NextVLAD network to aggregate the features over consecutive video frames.
How to use: direct inference on an input video clip or an input video URL; provide the input video, and the category can be identified with a simple pipeline call.
Usage scenarios: suitable for short videos with a clear theme; videos should not exceed 30 seconds.
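To make the "feature aggregation" step in the description above more concrete, here is a toy NumPy sketch of VLAD-style aggregation over per-frame features. Note this is a simplified NetVLAD, not the model's actual NextVLAD (which adds feature grouping and a gating mechanism), and the feature dimensions and cluster centers are made up for illustration.

```python
import numpy as np

def netvlad_aggregate(features, centers):
    """Aggregate per-frame features (T, D) against K cluster centers (K, D)
    into a single fixed-length video descriptor of length K*D."""
    # Soft-assign each frame to clusters via softmax over negative squared distance
    d2 = ((features[:, None, :] - centers[None, :, :]) ** 2).sum(-1)   # (T, K)
    a = np.exp(-d2)
    a /= a.sum(axis=1, keepdims=True)                                   # (T, K)
    # Accumulate assignment-weighted residuals from each cluster center
    resid = features[:, None, :] - centers[None, :, :]                  # (T, K, D)
    vlad = (a[:, :, None] * resid).sum(axis=0)                          # (K, D)
    v = vlad.flatten()
    return v / (np.linalg.norm(v) + 1e-12)                              # L2-normalize
```

The point of this style of aggregation is that any number of frames T collapses into one K*D vector, which a downstream classifier can then map to a category.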
To use the model we need the ModelScope environment. The platform provides a pre-installed online environment for testing, and we can also set up the ModelScope library locally. The official documentation covers both in detail, but I will briefly walk through the online environment and the local environment here.
1. Testing the video classification model online
1.1 Online Notebook Experimental Environment
Go to the page of the model we want to try, and click [Open in Notebook] in the upper right corner of the page.
Choose an instance to start, either CPU or GPU. The CPU environment is completely free and can be used for 4 hours per session.
Select [Method 1], click [Start], wait a few minutes, then click [View Notebook] to jump to the online test platform.
The free environment here has 8 cores and 32 GB of RAM, which is much higher than my own ECS configuration; testing here is really comfortable.
On the launcher page, click [Python3] to enter the code-debugging environment, or click [Terminal] to open a Linux terminal, where you can use pip to install any other libraries you need. The notebook environment comes with the ModelScope Library pre-installed, so testing works out of the box.
1.2 Testing the model in the online environment
We enter the sample code in a cell and click Run.
from modelscope.pipelines import pipeline
from modelscope.utils.constant import Tasks

category_pipeline = pipeline(
    Tasks.video_category, model='damo/cv_resnet50_video-category')
result = category_pipeline('1.mp4')
print(result)
Of course, we also need to upload a sample video to classify. I uploaded a domestic animation video here; just drag it into the notebook.
The model inferred the classification of the video I uploaded and returned the test result, classifying it as [Game >> Short animation]:
{'scores': [0.38548532128334045], 'labels': ['Game>>Short animation']}
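The returned dict pairs each label with a confidence score. A small pure-Python helper can pick the best-scoring category and split the platform-style "A>>B" label path; note that the 0.3 confidence threshold here is my own illustrative choice, not something the model documents.

```python
def top_category(result, threshold=0.3):
    """Return (main_section, sub_section, score) for the best-scoring label,
    or None if no score reaches the confidence threshold."""
    pairs = sorted(zip(result['scores'], result['labels']), reverse=True)
    score, label = pairs[0]
    if score < threshold:
        return None
    main, _, sub = label.partition('>>')
    return main.strip(), sub.strip(), score

result = {'scores': [0.38548532128334045], 'labels': ['Game>>Short animation']}
print(top_category(result))
# → ('Game', 'Short animation', 0.38548532128334045)
```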
2. Building a simple local short-video classification application
2.1 Local experimental environment
Use Anaconda to create a Python 3.7 environment on a local Anolis OS system (Anolis is recommended as a CentOS replacement).
# Create a Python 3.7 conda environment named modelscope
[root@k8s ~]# conda create -n modelscope python=3.7
# Activate the environment, install the deep learning frameworks, then install all domain components of the ModelScope library (they can also be installed separately)
[root@k8s bin]# source activate modelscope
(modelscope) [root@k8s bin]# pip install torch torchvision torchaudio -i https://pypi.tuna.tsinghua.edu.cn/simple
(modelscope) [root@k8s bin]# pip install --upgrade tensorflow -i https://pypi.tuna.tsinghua.edu.cn/simple
(modelscope) [root@k8s bin]# pip install "modelscope[audio,cv,nlp,multi-modal]" -f https://modelscope.oss-cn-beijing.aliyuncs.com/releases/repo.html
# After the long download, installation, and compilation, start an interactive Python session and run a few commands to check that the environment works
>>> from modelscope.pipelines import pipeline
>>> p = pipeline('video-category', model='damo/cv_resnet50_video-category')
# Test the model locally: the same sample runs successfully and returns the correct classification
>>> result = p('/root/1.mp4')
>>> print(result)
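Building on the interactive session above, the script below sketches what a minimal local classification tool could look like. The pipeline name and model id are taken from the session above; the classify/format_report helpers and the report layout are my own illustrative assumptions, and running classify of course requires the environment installed in 2.1.

```python
def classify(video_path):
    """Run the ModelScope video-category pipeline on one local file.
    modelscope is imported lazily so the rest of the script loads without it."""
    from modelscope.pipelines import pipeline
    p = pipeline('video-category', model='damo/cv_resnet50_video-category')
    return p(video_path)

def format_report(video_path, result):
    """Render the pipeline's {'scores': [...], 'labels': [...]} dict
    as one line per candidate category."""
    lines = [f'{video_path}:']
    for score, label in zip(result['scores'], result['labels']):
        lines.append(f'  {label}  ({score:.3f})')
    return '\n'.join(lines)

# Example call (requires modelscope and the frameworks installed above):
#   print(format_report('/root/1.mp4', classify('/root/1.mp4')))
```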