Community Blog

Building a High-Level Frontend Machine Learning Framework Based on tfjs-node

This article describes how Pipcook is integrated with TensorFlow and how the underlying models of tfjs-node are used to build a machine learning pipeline.

By Queyue

With the development of deep learning, all areas of our lives are undergoing intelligent transformations. As the team positioned closest to users, the frontend also wants to use AI capabilities to improve our efficiency, reduce labor costs, and provide users with a better experience. Intelligent transformation is seen as an important area of growth for the future of the frontend field.

However, the following issues hinder the adoption of intelligent development by the frontend team:

  • Algorithm engineers who are familiar with machine learning do not understand frontend businesses or the data accumulated at the frontend and its potential value. Therefore, it is difficult for them to participate in intelligent frontend development.
  • Traditional frontend engineers do not understand the common languages used in machine learning, such as Python and C++. As a result, language learning and conversion costs are high.
  • Traditional frontend engineers do not understand the algorithms and principles of deep learning. It is difficult for them to use existing machine learning frameworks, such as TensorFlow and PyTorch, to train models.

To solve these problems and promote intelligent frontend development, we developed Pipcook. Pipcook uses the frontend-friendly JavaScript (JS) environment, adopts the TensorFlow.js framework for its underlying algorithm capabilities, and encapsulates algorithms for frontend business scenarios. This allows frontend engineers to quickly and easily use machine learning capabilities.

This article describes how Pipcook is integrated with TensorFlow.js and how the underlying models and computing capabilities of tfjs-node are used to build a high-level machine learning pipeline. For more information about Pipcook, visit the official GitHub repository at https://github.com/alibaba/pipcook

Why Use TensorFlow.js as the Underlying Algorithm Framework

TensorFlow.js is a JS-based machine learning framework released by Google in 2018. Google has since made the relevant code open source. Pipcook uses tfjs-node as the underlying framework for data processing and model training, develops plugins on TensorFlow.js, and assembles the plugins into a pipeline. We use TensorFlow.js for the following reasons:

  • Pipcook is designed to serve frontend engineers and was developed using JS. Therefore, we prefer a JS-based computing framework to prevent performance loss or errors caused by bridging other languages.
  • Unlike other JS-based machine learning frameworks, TensorFlow.js is backed by TensorFlow, which is widely used from C++ and Python. TensorFlow.js reuses the underlying C++ capabilities and operators to support a large number of network layers, activation functions, optimizers, and other components. It also provides good performance and supports GPUs.
  • Tools, such as tfjs-converter, are provided to convert SavedModel or Keras models into JS models. In this way, many mature Python models can be reused.
  • JS does not have sophisticated mathematical capabilities or a scientific computing library like NumPy, and similar libraries are difficult to integrate seamlessly with other computing frameworks. TensorFlow.js provides a tensor encapsulation equivalent to the NumPy array and supports high-performance training on tensors.
  • TensorFlow.js provides dataset APIs to abstract data, encapsulate simple and efficient interfaces for data, and support batch data processing. Data flows of dataset APIs can be efficiently combined with Pipcook pipelines.

Data Processing With TensorFlow.js

To conduct machine learning, we need to access and process large amounts of data. In traditional scenarios with small data volumes, we can read all the data into memory at once. However, in deep learning scenarios, the data volume generally exceeds the available memory, so we need to fetch partial data from the data source on demand. The dataset APIs provided by TensorFlow.js encapsulate data access for exactly these scenarios.


In a standard Pipcook pipeline, we use dataset APIs to encapsulate and process data. The preceding figure shows a typical data flow.

  • First, the data collect plugin reads the original training data into the pipeline. The original data may be local files or data stored in the cloud.
  • Then, the data collect plugin determines the data format and encapsulates the data into corresponding tensors.
  • The data access plugin takes over and wraps the tensors in a tf.data.Dataset to facilitate subsequent batch processing and training.
  • The data process plugin processes the data, including shuffling and augmentation. These operations use dataset operators, such as map, to batch process data in the data flow.
  • Finally, the model load plugin feeds the data to the model in batches for training.

We can regard the dataset as a group of iterable training data, similar to a Stream in Node.js. Each time the next element is requested from the dataset, the internal implementation fetches the data as required and executes the preset data processing functions. This abstraction allows the model to train on large amounts of data easily. Multiple datasets can also be combined and consumed as a single dataset.

Model Training


TensorFlow.js provides low-level and high-level APIs. The low-level APIs are derived from deeplearn.js and include the operators required for building models. They handle the mathematical operations in machine learning, such as basic linear algebra. The high-level Layers API encapsulates common machine learning algorithms and also allows us to load trained models, such as Keras models.

Pipcook uses plugins to develop and run models. Each model load plugin loads a specific model, and most models are implemented based on TensorFlow.js. tfjs-node also provides features that accelerate model training, such as GPU acceleration. Given the current state of the ecosystem, some models are still expensive to implement in TensorFlow.js. To solve this problem, Pipcook provides Python bridging and other methods that allow you to call Python to train models from the JS runtime environment. We will describe the bridging details in subsequent articles.


An industrial-level machine learning pipeline needs a way to deploy the model after training so that it can serve real businesses. Currently, Pipcook provides the following deployment solutions, which can be implemented using the model deploy plugin.

  • Quick Verification: You may want to quickly test your data and model with a small amount of data and a few epochs. In this scenario, there is no need to deploy the model remotely for verification. Pipcook ships with a local deploy plugin: after the model is trained, Pipcook starts a local prediction server to provide the prediction service.
  • Docker Images: Pipcook provides official images that contain the environments required for training and prediction. You can deploy an image to your deployment host or use a Kubernetes cluster or other solution to manage Docker images.
  • Cloud Service Integration: Pipcook will integrate machine learning deployment services provided by different cloud service providers. In the current phase, Google Cloud has integrated TensorFlow.js with automated machine learning (AutoML). Pipcook will support Alibaba Cloud, AWS, and other services in the future.

Comparison With TFX

Our ultimate goal is a mature and industrial-level machine learning pipeline that can apply excellent models to a production environment. To achieve the same goal, Google has released the open-source product TensorFlow Extended (TFX) based on its practices. You may wonder if Pipcook is any different from TFX. Pipcook is not designed to replace any other frameworks, especially products based on the Python ecosystem. Pipcook aims to promote intelligent frontend development. Therefore, Pipcook uses technology stacks and product-based methods oriented to the frontend.

  • TFX uses the directed acyclic graph (DAG) method because it involves multiple operations, such as data generation, statistical analysis, data verification, and data conversion. These operations can be freely combined. However, most frontend scenarios do not involve complex combined operations. Therefore, Pipcook uses the pipeline method to abstract data operations to simple plugins in the pipeline, which reduces the demands placed on frontend engineers.
  • TFX uses Apache Airflow for scheduling, while Pipcook uses the frontend tech stack for such operations. For example, we use reactive frameworks, such as RxJS, to respond to and connect different plugins in serial mode, making it easy for frontend personnel to understand and contribute code.
  • In addition, Pipcook provides JS-based APIs, which help frontend personnel reduce their learning and use costs.

Based on the preceding design, we are attempting to build a frontend-friendly machine learning environment to meet our expectations and goals.


Pipcook has been open source for about a month. In this period, we have received some user feedback. We hope to leverage the capabilities of the open-source community to optimize Pipcook so it can promote intelligent frontend development. To further develop Pipcook, we plan to:

  • Cooperate with cloud service providers, such as Alibaba Cloud, AWS, and Google Cloud, to establish machine learning links between Pipcook and different cloud services
  • Optimize the ecosystem and provide Pipcook trials to make it easier for users to get started
  • Support distributed training
  • Provide diversified plugins, optimize models, and support more pipelines

In the future, we hope to combine the power of Alibaba's intelligent frontend team and the entire open-source community to continuously optimize Pipcook and the push for intelligent frontend capabilities it represents. This way, we can provide inclusive technical solutions for intelligent frontend capabilities, accumulate more competitive samples and models, provide intelligent code generation services with higher accuracy and availability, and improve frontend R&D efficiency. In addition, frontend engineers will no longer have to do simple and repetitive work, giving them more time to focus on challenging work.


Alibaba F(x) Team

