Getting Started

Core ML Tools can convert trained models from other frameworks into an in-memory representation of the Core ML model. This example demonstrates how to convert an image classifier model trained using TensorFlow's Keras API to the Core ML format.

In this example you will do the following:

  • Download a MobileNetV2 model trained with Keras, along with its class labels.
  • Convert the model to Core ML as an ML program or a neural network.
  • Set the model's metadata to take advantage of Xcode features.
  • Make predictions with the converted model to verify the conversion.
  • Save and load the model.
  • Open and inspect the model in Xcode.

📘 Note

The sample model was trained on the ImageNet dataset. It can classify images from a set of 1000 predefined image categories.


Prerequisites

To run this example, you need TensorFlow 2.2.0 and h5py 2.10.0, as well as Core ML Tools and Pillow. Use the following command to install them:

pip install tensorflow==2.2.0 h5py==2.10.0 coremltools pillow
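
If you want to confirm the environment before proceeding, you can print the installed versions; a quick check:

import tensorflow as tf
import coremltools as ct

# Both packages expose a standard __version__ attribute
print(tf.__version__)  # expected: 2.2.0
print(ct.__version__)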

Download the Model

Download the pretrained MobileNetV2 model, which is available through the tensorflow.keras.applications API:

import tensorflow as tf # TF 2.2.0

# Download MobileNetv2 (using tf.keras)
keras_model = tf.keras.applications.MobileNetV2(
    weights="imagenet", 
    input_shape=(224, 224, 3,),
    classes=1000,
)

Retrieve the class labels from a separate file, as shown in the example below. These class labels are required for your application to understand what the predictions refer to, and are often provided separately in popular machine learning frameworks.

# Download class labels (from a separate file)
import urllib
label_url = 'https://storage.googleapis.com/download.tensorflow.org/data/ImageNetLabels.txt'
class_labels = urllib.request.urlopen(label_url).read().splitlines()
class_labels = class_labels[1:] # remove the first class which is background
assert len(class_labels) == 1000

# make sure entries of class_labels are strings
for i, label in enumerate(class_labels):
  if isinstance(label, bytes):
    class_labels[i] = label.decode("utf8")

Now that the model is loaded and the class labels are collected, you can convert the model to Core ML.

📘 Tip

Before converting a model, it is good practice to know its input and output types and shapes in advance, and to consider the model's interface, including the names and types of inputs and outputs. For this example the application is an image classifier, so the model's input is an ImageType of a specified size. When using an ImageType input, it is also important to find out how the model expects its input to be preprocessed or normalized.
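
For example, you can query the source Keras model for its expected input and output shapes before deciding on the conversion inputs; a minimal sketch using standard tf.keras model attributes:

# Inspect the source model's interface before conversion
print(keras_model.input_shape)   # (None, 224, 224, 3)
print(keras_model.output_shape)  # (None, 1000)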

Convert the Model

Use the following code to convert the model to Core ML with an image as the input, and the class labels baked into the model:

import coremltools as ct

# Define the input type as image, 
# set pre-processing parameters to normalize the image 
# to have its values in the interval [-1,1] 
# as expected by the mobilenet model
image_input = ct.ImageType(shape=(1, 224, 224, 3,),
                           bias=[-1,-1,-1], scale=1/127)

# set class labels
classifier_config = ct.ClassifierConfig(class_labels)

# Convert the model using the Unified Conversion API to an ML Program
model = ct.convert(
    keras_model, 
    convert_to="mlprogram",
    inputs=[image_input], 
    classifier_config=classifier_config,
)

The Unified Conversion API convert() method in the previous example produces an ML program model. The convert_to parameter specifies the mlprogram model type.

If you exclude the convert_to parameter, the method produces a neural network with the default neuralnetwork model type. The following is the same code as in the previous example, but without the convert_to parameter:

import coremltools as ct

# Define the input type as image, 
# set pre-processing parameters to normalize the image 
# to have its values in the interval [-1,1] 
# as expected by the mobilenet model
image_input = ct.ImageType(shape=(1, 224, 224, 3,),
                           bias=[-1,-1,-1], scale=1/127)

# set class labels
classifier_config = ct.ClassifierConfig(class_labels)

# Convert the model using the Unified Conversion API to a neural network
model = ct.convert(
    keras_model, 
    inputs=[image_input], 
    classifier_config=classifier_config,
)

👍 ML Programs vs. Neural Networks

To learn the differences between the newer ML Program model type and the neural network model type, see Comparing ML Programs and Neural Networks.
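
If you want to confirm which model type a conversion produced, one option is to inspect the model's protobuf spec; a minimal sketch, using the get_spec() accessor that coremltools models provide:

# Check which model type the converter produced
spec = model.get_spec()
print(spec.WhichOneof("Type"))  # e.g. "mlProgram" or "neuralNetworkClassifier"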

Set the Model Metadata

After converting the model, you can set additional metadata for the model in order to take advantage of Xcode features:

  • Xcode can display a Preview tab of the model’s output for a given input. The classifier model for this example does not need further metadata to display the Preview tab. For other examples, see Xcode Model Preview Types.
  • Xcode can show descriptions of inputs and outputs as code comments for documentation, and display them under the Predictions tab.
  • Xcode can also show additional metadata (such as the license and author) within the Xcode UI.
# Set feature descriptions (these show up as comments in Xcode)
model.input_description["input_1"] = "Input image to be classified"
model.output_description["classLabel"] = "Most likely image category"

# Set model author name
model.author = "Original Paper: Mark Sandler, Andrew Howard, Menglong Zhu, Andrey Zhmoginov, Liang-Chieh Chen"

# Set the license of the model
model.license = "Please see https://github.com/tensorflow/tensorflow for license information, and https://github.com/tensorflow/models/tree/master/research/slim/nets/mobilenet for the original source of the model."

# Set a short description for the Xcode UI
model.short_description = "Detects the dominant objects present in an image from a set of 1000 categories such as trees, animals, food, vehicles, and people. The top-1 accuracy from the original publication is 74.7%."

# Set a version for the model
model.version = "2.0"

For a detailed example, see Integrating a Core ML Model into Your App.

Make Predictions

To verify the conversion programmatically, Core ML Tools provides the predict() API method to evaluate a Core ML model. This method is only available on macOS, as it requires the Core ML framework to be present.

📘 Note

Core ML models can be imported and executed with TVM, which may provide a way to test Core ML models on non-macOS systems.

Compare predictions made by the converted model with predictions made by the source model:


An example test image. Right-click and choose Save Image to download.

# Use PIL to load and resize the image to expected size
from PIL import Image
example_image = Image.open("daisy.jpg").resize((224, 224))

# Make a prediction using Core ML
out_dict = model.predict({"input_1": example_image})

# Print out top-1 prediction
print(out_dict["classLabel"])
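
To compare against the source model, you can run the same image through the original Keras model. The sketch below mirrors the preprocessing configured on the ImageType earlier (scale 1/127, bias -1); the top-1 label should match the Core ML output:

import numpy as np

# Reproduce the ImageType preprocessing: pixel * (1/127) + (-1)
x = np.array(example_image, dtype=np.float32)[np.newaxis] / 127.0 - 1.0

# Run the source Keras model and map its top prediction to a label
keras_probs = keras_model.predict(x)
print(class_labels[int(np.argmax(keras_probs))])  # should match out_dict["classLabel"]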

📘 Note

Predictions on macOS may differ from predictions on your target platform (such as iOS or watchOS), so you should still verify the model on the target platform itself. For more information on using the predict API, see the article on model prediction.

Save and Load the Model

With the converted ML program model in memory, you can save it as a Core ML model package by specifying the .mlpackage extension with the save() method. You can then load the model in another session:

# Save model as a Core ML model package
model.save("MobileNetV2.mlpackage")
# Load the saved model
loaded_model = ct.models.MLModel("MobileNetV2.mlpackage")
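
The loaded model behaves like the in-memory model; for example, you can repeat the earlier prediction to verify the round trip (macOS only, as noted above):

# Verify the saved model by predicting with the loaded copy
print(loaded_model.predict({"input_1": example_image})["classLabel"])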

If you converted the model to the default neural network model type, you can save the model into either the model package format as in the previous example, or into the older .mlmodel file format. The following example saves the model in an .mlmodel file and loads the model into another session:

# Save model in a Core ML mlmodel file
model.save("MobileNetV2.mlmodel")

# Load the saved model
loaded_model = ct.models.MLModel("MobileNetV2.mlmodel")

Use the Model with Xcode

Double-click the newly converted MobileNetV2.mlpackage or MobileNetV2.mlmodel file in the Mac Finder to launch Xcode and open the model information pane:


The classifier model for this example offers tabs for Metadata, Preview, Predictions, and Utilities. The Preview tab appears for classifier models, and for models that have added preview metadata as described in Xcode Model Preview Types.

📘 Tip

To use the model with an Xcode project, drag the model file to the Xcode Project Navigator. Choose options if you like, and click Finish. You can then select the model in the Project Navigator to show the model information.

Click the Predictions tab to see the model’s input and output.


To preview the model’s output for a given input, follow these steps:

  1. Click the Preview tab.
  2. Drag an image into the image well on the left side of the model preview.
  3. The output appears in the preview pane under the image.

For more information about using Xcode, see the Xcode documentation.