
An Introduction to Core Machine Learning

In this article, we will learn about the Core ML framework and briefly discuss how it can be used to complement machine learning applications deployed on Alibaba Cloud.

By Alex Mungai Muchiri, Alibaba Cloud Tech Share Author. Tech Share is Alibaba Cloud's incentive program to encourage the sharing of technical knowledge and best practices within the cloud community.

There's a lot of excitement in the data science community right now as more tech companies compete to release products in the area. If you were paying attention during the iPhone X launch, you would have noticed some of the cool features that came with the device, such as Face ID, Animoji, and augmented reality (AR). These features rely on machine learning to work. If you are a data scientist like me, you are probably wondering how to build such systems.

Enter Core ML, a machine learning framework from Apple for the developer community. It is compatible with all Apple products, from the iPhone and Apple TV to the Apple Watch. Furthermore, the new A11 Bionic chip, which ships in the latest iPhone models, incorporates a dedicated neural engine alongside a custom-designed GPU for advanced machine learning capabilities. The new tool opens up a whole new era of machine learning possibilities on Apple devices, and it is bound to spur creativity, innovation, and productivity. Since Core ML is so significant, we shall evaluate what it is and why it is becoming so important. We shall also see how to implement a model and evaluate Core ML's merits and demerits.

What Is Core ML?

Simply put, the Core Machine Learning framework enables developers to integrate machine learning models into iOS applications. Core ML runs on both the CPU and the GPU, drawing on the Metal and Accelerate frameworks underneath. Notably, models run on the device itself, which allows data to be analysed locally. In most cases, locally run machine learning models are more limited in complexity and throughput than cloud-based tools.

Previous Frameworks by Apple

Apple has created machine learning frameworks for its devices before. The most notable are two libraries:

  1. Accelerate and Basic Neural Network Subroutines (BNNS)

    It uses convolutional neural networks to make efficient predictions on the CPU.

  2. Metal Performance Shaders CNN (MPSCNN)

    It uses convolutional neural networks to make efficient predictions on the GPU.


The two frameworks were distinct in their optimizations: one for the CPU and the other for the GPU. Notably, inference is often faster on the CPU, while training is faster on the GPU. Most developers found, and still find, these frameworks confusing to work with. Furthermore, they were not easily programmable due to their close association with the hardware.

Core Machine Learning Architecture

[Figure omitted. Source: www.apple.com]

The two previous libraries are still in place; the Core ML framework is simply another abstraction layer on top of them. Its interface is easier to work with, with similar efficiency. When using the new tool, you do not need to worry about switching between the CPU and the GPU for inference and training. The CPU handles memory-intensive workloads such as natural language processing, while the GPU handles compute-intensive workloads such as image processing and identification. Core ML manages this context switching for you, taking care of the specific needs of your app depending on the task at hand.

[Figure omitted. Source: www.apple.com]

Core ML Capabilities

There are three libraries associated with Core ML that form part of its functionality:

  1. Vision: identification of faces, detection of features, and classification of image and video scenes are achieved through this library. It is built on computer vision techniques and high-performance image processing.
  2. Foundation (NLP): this library incorporates the tools that enable natural language processing in iOS apps.
  3. GameplayKit: this kit provides decision trees for game development and other artificial intelligence requirements.

Apple has done a great deal of work to make the above libraries easy to interface with and operationalise in your apps. Adding them to the Core ML architecture gives us the structure below:

[Figure omitted. Source: www.apple.com]

Integrating Core ML into this structure makes an iOS application more modular and more scalable. Since there are numerous layers, each one can be used in any number of ways. For more about these libraries, follow these links: Vision, Foundation and GameplayKit. Alright, now let us cover some practical basics:

How to Set Up the System

The following are the requirements for setting up a simple Core ML project:

  1. The Operating System: macOS (Sierra 10.12 or above)
  2. Programming Language: Python 2.7 for Mac, plus pip. Install pip using the command below:
    sudo easy_install pip

  3. Coremltools: this package converts machine learning models written in Python into a format understood by the Core ML framework. Install it by running the command below:
    sudo pip install -U coremltools

  4. Xcode 9: Xcode 9 is the default platform on which iOS applications are built, and is accessible here. Log in using your Apple ID to download Xcode.

After logging in, verify your identity by confirming the notification on your Apple device.

Tap "Allow", then type the given six-digit passcode into the website.

After you verify your identity using the six-digit code, you will get a link to download Xcode. Once everything is installed, a quick sanity check of the Python tooling is worthwhile; after that, we will look at how to convert trained machine learning models to the Core ML standard.
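A minimal check (assuming the pip installs above succeeded; coremltools exposes its version string):

import coremltools

# If this prints a version number, the converter tooling is ready.
print(coremltools.__version__)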

Converting Trained Models to Core ML

Converting trained machine learning models transforms them into a format compatible with Core ML. Core ML Tools is the package Apple provides for this purpose. However, there are third-party alternatives, such as the MXNet converter or the TensorFlow converter, which work pretty well. It is also possible to build your own tool if you follow the Core ML standards.

Using Core ML Tools

This tool, written in Python, converts a wide range of model types to a format understood by Core ML. Apple lists the supported models and third-party frameworks in the table below.

Third-party frameworks and ML models compatible with Core ML Tools:

[Table omitted: it lists the supported model types and the third-party frameworks that produce them.]

Converting ML Models

If your model was built with one of the third-party frameworks indicated above, conversion is straightforward: you call the convert method and save the result in the Core ML model format (.mlmodel). For models created using Caffe, pass the Caffe model file (.caffemodel) to the coremltools.converters.caffe.convert method:

import coremltools
coreml_model = coremltools.converters.caffe.convert('my_caffe_model.caffemodel')

The result after conversion should now be saved in the Core ML format:

coremltools.utils.save_spec(coreml_model, 'my_model.mlmodel')

For some model types, you may have to include additional information about inputs, outputs, and labels; in other cases, you may have to declare image names, types, and formats. Each conversion tool ships with its own documentation covering these options, and Core ML Tools includes package documentation with further information.
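For instance, a Caffe image classifier typically needs its deploy .prototxt file, the name of its image input, and a labels file. A hedged sketch (the file names here are placeholders, not files from this article):

import coremltools

# Pass the weights together with the network definition, mark the
# input as an image, and attach human-readable class labels.
coreml_model = coremltools.converters.caffe.convert(
    ('my_caffe_model.caffemodel', 'my_deploy.prototxt'),
    image_input_names='data',
    class_labels='my_labels.txt'
)
coreml_model.save('my_model.mlmodel')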

Using a Custom Conversion Tool

If your model is not among those supported by Core ML Tools, you can create your own converter. The process entails translating your model's parameters, such as input, output, and architecture, into the Core ML standard. Every layer of the model's architecture has to be defined, along with how it connects to the other layers. Core ML Tools contains examples of such conversions, demonstrating how third-party model types are mapped to the Core ML format.
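Core ML Tools also exposes a lower-level builder API that can serve as the skeleton of a custom converter. A minimal sketch, assuming a toy one-layer model whose weights you have extracted yourself:

import numpy as np
from coremltools.models import datatypes, MLModel
from coremltools.models.neural_network import NeuralNetworkBuilder

# Describe the model's input and output interfaces.
input_features = [('input', datatypes.Array(3))]
output_features = [('output', datatypes.Array(2))]

# Define each layer and how it connects to the others.
builder = NeuralNetworkBuilder(input_features, output_features)
builder.add_inner_product(name='dense_1',
                          W=np.ones((2, 3)),  # weights extracted from your model
                          b=np.zeros(2),      # bias extracted from your model
                          input_channels=3,
                          output_channels=2,
                          has_bias=True,
                          input_name='input',
                          output_name='output')

# Wrap the generated spec as an .mlmodel file.
MLModel(builder.spec).save('custom_model.mlmodel')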

Training the Model

There are numerous ways to train a machine learning model; in our case, we shall use MXNet, an acceleration library for building large-scale deep neural networks and the mathematical computations behind them. The library is desirable for the following reasons:

  1. Device Placement: specify where data structures should live
  2. Multi-GPU Training: MXNet allows computation-intensive workloads to be scaled across GPUs
  3. Automatic Differentiation: MXNet handles the derivative calculations that neural networks require (see the short sketch after this list)
  4. Predefined Layers: includes predefined layers that are efficient for speed and performance
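As a taste of point 3, here is a minimal sketch of MXNet's automatic differentiation (illustrative only; it uses the mxnet.autograd API):

import mxnet as mx
from mxnet import autograd, nd

# Record a computation, then let MXNet derive the gradient automatically.
x = nd.array([1.0, 2.0, 3.0])
x.attach_grad()
with autograd.record():
    y = 2 * x * x
y.backward()
print(x.grad)  # dy/dx = 4x -> [4. 8. 12.]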

Install MXNet and the Converter

The converter requires macOS El Capitan (10.11) or a later version, and Python 2.7 must be installed first.

The command below installs the MXNet framework and the conversion tool:

pip install mxnet-to-coreml

Converting MXNet Models

Once our tools have been installed, we can proceed to convert models trained with MXNet into the Core ML format. As an example, we will use a simple model that takes an image and attempts to determine where it was taken.

All MXNet models consist of two parts:

  1. Model definition in JSON format
  2. Parameters in a binary file
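These two parts map directly onto MXNet's checkpoint API. A small sketch (the prefix and epoch here match the converter command used further below):

import mxnet as mx

# Loads RN101-5k500-symbol.json (the definition) and
# RN101-5k500-0012.params (the parameters).
sym, arg_params, aux_params = mx.model.load_checkpoint('RN101-5k500', 12)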

Our simple location detection model therefore contains three files: the model definition (JSON), the parameters (binary), and a text file of geographic cells. When Google's S2 Geometry Library is used for training, each line of the text file contains three fields, like so:

Google S2 token, latitude, longitude (e.g., 8644b554 29.1835189632 -96.8277835622). The iOS app only requires the coordinate information.
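Extracting just the coordinates from such a line is a one-liner; a tiny sketch using the example record above:

# Split one grids.txt record into its three fields and keep the coordinates.
line = "8644b554 29.1835189632 -96.8277835622"
token, lat, lon = line.split()
print(float(lat), float(lon))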

Once everything is all set up, run the command below:

mxnet_coreml_converter.py --model-prefix='RN101-5k500' --epoch=12 --input-shape='{"data":"3,224,224"}' --mode=classifier --pre-processing-arguments='{"image_input_names":"data"}' --class-labels grids.txt --output-file="RN1015k500.mlmodel"

The converter does its magic, recreating the MXNet model as a Core ML equivalent, and a SUCCESS confirmation should be printed. From there, import the generated file into your Xcode project.


The next step requires Xcode to be running on your computer. When you click the newly converted file, you should see its properties, such as its name, size, and the parameters as they would apply within your Swift code.

Configure Your Code for iOS

In Xcode, drag and drop the file into your project and tick the Target Membership checkbox. Next, test the application on a physical device, or alternatively use the Xcode simulator. Keep in mind that the app must be signed with your Team account to run on a physical device. Finally, build the app and run it. That's it!

Integrating Core ML into Your App

Getting started with Core ML is as easy as integrating it into your mobile application. However, trained models can take up a large chunk of a device's storage. For neural networks, you can mitigate this by reducing the precision of the parameters' weights; for non-neural networks, you can reduce the application's size by storing models in the cloud and having the app download them on demand rather than bundling them. Half precision is the common choice for neural networks: halving the width of the connection weights shrinks the network considerably, but it also reduces floating-point accuracy and the range of representable values.
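Core ML Tools ships a helper for this half-precision conversion; a minimal sketch, assuming a neural-network .mlmodel and a coremltools version that provides the utility:

import coremltools
from coremltools.utils import convert_neural_network_weights_to_fp16

# Load a full-precision model and store its weights as 16-bit floats,
# roughly halving the file size at some cost in accuracy.
model = coremltools.models.MLModel('my_model.mlmodel')
model_fp16 = convert_neural_network_weights_to_fp16(model)
model_fp16.save('my_model_fp16.mlmodel')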

Advantages of Core ML

  1. Highly optimized for on-device performance
  2. On-device execution ensures data privacy
  3. Can make predictions without an internet connection
  4. Can import models from a cloud platform
  5. Workloads move automatically between the CPU and GPU
  6. Compatible with most of the popular learning frameworks


Disadvantages of Core ML

  1. No training on the device; inference only
  2. Only supports supervised models
  3. Only supported layers can be used
  4. You only have access to the predictions, not the outputs of intermediate layers
  5. Currently supports only regression and classification

Conclusion

You can use Alibaba Cloud to train and optimize models at scale before exporting them to devices. Core ML simplifies the process of integrating machine learning into applications built on iOS. And while the tool runs on the device, there is a huge opportunity to use the cloud as a platform for combining output from various devices and conducting large-scale analytics and machine learning optimisations.

Resources

https://developer.apple.com/documentation/coreml/converting_trained_models_to_core_ml

https://www.python.org
