The cloud-native AI console provides a platform for data scientists to perform machine learning tasks, such as data inspection, data preprocessing, model building, model analysis, model testing, and model publishing. Jupyter Notebook is a commonly used development tool in machine learning workflows. Container Service for Kubernetes (ACK) integrates Jupyter Notebook into the AI development console and provides an authorization mechanism that you can use to manage notebooks. This topic describes how to create and use a Jupyter notebook in the AI development console.
Prerequisites
- A professional managed Kubernetes cluster is created. The cluster runs Kubernetes 1.18 or later.
- The ack-ai-dev-console and ack-ai-installer components of the cloud-native AI component set are installed in the cluster.
- A Resource Access Management (RAM) user is created by the cluster administrator. Required quota groups are added and associated with the RAM user. For more information, see Create a RAM user and Step 1: Add a quota group and associate the quota group with the RAM user.
- Training data is prepared. For more information, see Configure datasets and source code repositories for a training job.
- Persistent volume claims (PVCs) are created. For more information, see Use a NAS file system as a statically provisioned volume in the ACK console or Mount an OSS bucket as a statically provisioned volume in the ACK console. You can check the PVCs by running the commands in the sketch after this list.
Note In most cases, data used to train models is stored in Object Storage Service (OSS) volumes or Apsara File Storage NAS (NAS) volumes.
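The following commands are a minimal sketch for verifying the prerequisites from a machine that can access the cluster. The kube-ai namespace and the names default and training-data are assumptions; replace them with the values used in your cluster.
```bash
# Check that the cloud-native AI components are running.
# kube-ai is an assumption; use the namespace in which the components are installed.
kubectl get pods -n kube-ai | grep -E 'ack-ai-dev-console|ack-ai-installer'

# Check that the PVC that holds the training data is bound.
# Replace default and training-data with your namespace and PVC name.
kubectl get pvc -n default training-data
```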
Introduction to Jupyter Notebook
- Provides a machine learning experiment environment that can be integrated into a cloud platform. This environment allows you to develop machine learning applications in the same way as on your on-premises machine.
- Provides a tool to copy and process datasets. This allows you to save processed data to persistent storage. You can also process datasets by using big data tools.
- Provides an environment for machine learning processes, such as testing and preprocessing. In this environment, you can write the code of machine learning tasks, package the code into Docker images, and push the images to your cluster or remote Docker repositories, as shown in the sketch after this list.
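For example, after you develop and test code in the notebook, you can package it into an image and push it to an image repository. The following commands are a minimal sketch; the Dockerfile path, the image name my-training-job, and the registry address registry.cn-hangzhou.aliyuncs.com/your-namespace are assumptions and must be replaced with your own values.
```bash
# Build an image from the project directory that contains your code and Dockerfile.
docker build -t registry.cn-hangzhou.aliyuncs.com/your-namespace/my-training-job:v1 .

# Log on to your image repository and push the image.
docker login registry.cn-hangzhou.aliyuncs.com
docker push registry.cn-hangzhou.aliyuncs.com/your-namespace/my-training-job:v1
```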
Step 1: Create a Jupyter notebook
Step 2: Use a Jupyter notebook
Create a custom notebook image
To meet the requirements of different users, ACK allows you to specify a custom notebook image.
You can perform the following steps to create and package a custom image by using a Dockerfile:
```dockerfile
FROM tensorflow/tensorflow:1.15.5-gpu
USER root
RUN pip install jupyter && \
    pip install ipywidgets && \
    jupyter nbextension enable --py widgetsnbextension && \
    pip install jupyterlab && jupyter serverextension enable --py jupyterlab
# You can use other methods to install JupyterLab. You must expose the service through port 8888.
EXPOSE 8888
USER jovyan
CMD ["sh", "-c", "jupyter-lab --notebook-dir=/home/jovyan --ip=0.0.0.0 --no-browser --allow-root --port=8888 --ServerApp.token='' --ServerApp.password='' --ServerApp.allow_origin='*' --ServerApp.base_url=${NB_PREFIX} --ServerApp.authenticate_prometheus=False"]
USER root
```
The following table describes the parameters.
Parameter | Description |
---|---|
--notebook-dir | The working directory in which JupyterLab starts. Default value: /home/jovyan. This is the official JupyterLab setting. |
--ip | The IP address on which the Jupyter service listens. Default value: 0.0.0.0, which allows access from external IP addresses. |
--no-browser | Specifies that no browser is opened when the service starts. This option is intended for Linux server environments. |
--port | The port on which the service listens for external requests. This parameter is required and must be set to 8888. This is the official JupyterLab setting. |
--ServerApp.token | The custom logon token. By default, this parameter is empty. If you set this parameter, you must notify the users of the token. |
--ServerApp.password | The custom logon password. By default, this parameter is an empty string. If you set this parameter, you must notify the users of the password. |
--ServerApp.base_url | The path from which the JupyterLab service starts. The path is required for routing. This parameter is required and must be set to ${NB_PREFIX} so that the value is derived from the NB_PREFIX environment variable. |
--ServerApp.allow_origin | The origins from which JupyterLab can be accessed. Default value: '*', which allows access from all origins. |
--ServerApp.authenticate_prometheus | Specifies whether authentication is required to access the Prometheus metrics of Jupyter. Default value: False, which disables authentication. You can enable authentication based on your business requirements. |
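Before you push the custom image, you can run it locally to confirm that JupyterLab starts on port 8888 and honors the NB_PREFIX environment variable. The following commands are a minimal sketch; the image name notebook-custom:v1 and the prefix /notebook/test are assumptions.
```bash
# Build the custom notebook image from the preceding Dockerfile.
docker build -t notebook-custom:v1 .

# Run the image locally. NB_PREFIX simulates the base URL that is injected at runtime.
docker run --rm -p 8888:8888 -e NB_PREFIX=/notebook/test notebook-custom:v1

# In another terminal, check that JupyterLab responds under the base URL.
curl -I http://localhost:8888/notebook/test/lab
```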