Quickstart Example

Core ML Tools can convert trained models from other frameworks into an in-memory representation of the Core ML model. This example demonstrates how to convert an image classifier model trained using TensorFlow's Keras API to the Core ML format.

📘

Note

The sample model was trained on the ImageNet dataset. It can classify images from a set of 1000 predefined image categories.

In this example, you will do the following:

  • Download the model and class labels.
  • Convert the model to Core ML.
  • Set the model metadata.
  • Make predictions with the converted model.
  • Save and load the model.
  • Use the model with Xcode.

Download the model

Download the pretrained MobileNetV2 model, which is available through the tensorflow.keras.applications API:

import tensorflow as tf # TF 2.2.0

# Download MobileNetV2 (using tf.keras)
keras_model = tf.keras.applications.MobileNetV2(
    weights="imagenet", 
    input_shape=(224, 224, 3,),
    classes=1000,
)

Retrieve the class labels from a separate file, as shown in the example below. These class labels are required for your application to understand what the predictions refer to, and are often provided separately in popular machine learning frameworks.

# Download class labels (from a separate file)
import urllib.request
label_url = 'https://storage.googleapis.com/download.tensorflow.org/data/ImageNetLabels.txt'
class_labels = urllib.request.urlopen(label_url).read().splitlines()
class_labels = class_labels[1:] # remove the first class which is background
assert len(class_labels) == 1000

# make sure entries of class_labels are strings
for i, label in enumerate(class_labels):
  if isinstance(label, bytes):
    class_labels[i] = label.decode("utf8")

Now that the model is loaded into Keras and the class labels are collected, you can convert the model to Core ML.

📘

Tip

Before converting the model, it is good practice to know its interface in advance: the names, types, and shapes of its inputs and outputs. Knowing the input types and shapes up front also helps the converted model achieve the best on-device performance. For this example, the application is an image classifier, so the model's input is an ImageType of a specified size. When the input is an ImageType, it is also important to find out how the model expects its input to be preprocessed or normalized.
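
For instance, you can inspect the source Keras model's interface before converting it; a minimal sketch using standard Keras attributes:

# Confirm the source model's input shape and output size
print(keras_model.input_shape)   # (None, 224, 224, 3)
print(keras_model.output_shape)  # (None, 1000)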

Convert the model

Use the following code to convert the model to Core ML with an image as the input, and the class labels baked into the model:

import coremltools as ct

# Define the input type as image, 
# set pre-processing parameters to normalize the image 
# to have its values in the interval [-1,1] 
# as expected by the mobilenet model
image_input = ct.ImageType(shape=(1, 224, 224, 3,),
                           bias=[-1,-1,-1], scale=1/127)

# set class labels
classifier_config = ct.ClassifierConfig(class_labels)

# Convert the model using the Unified Conversion API
model = ct.convert(
    keras_model, inputs=[image_input], classifier_config=classifier_config,
)
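
As a quick sanity check, you can confirm the input and output names the converted model exposes (the later steps refer to "input_1" and "classLabel"). A minimal sketch using the model's spec:

# Print the converted model's input and output feature names
spec = model.get_spec()
print([inp.name for inp in spec.description.input])
print([out.name for out in spec.description.output])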

Set the model metadata

After converting the model, you can set additional metadata for the model in order to take advantage of Xcode features:

  • Xcode can display a Preview tab of the model’s output for a given input. The classifier model for this example does not need further metadata to display the Preview tab. For other examples, see Xcode Model Preview Types.
  • Xcode can show descriptions of inputs and outputs as code comments for documentation, and display them under the Predictions tab.
  • Xcode can also show additional metadata (such as the license and author) within the Xcode UI.

# Set feature descriptions (these show up as comments in Xcode)
model.input_description["input_1"] = "Input image to be classified"
model.output_description["classLabel"] = "Most likely image category"

# Set model author name
model.author = "Original Paper: Mark Sandler, Andrew Howard, Menglong Zhu, Andrey Zhmoginov, Liang-Chieh Chen"

# Set the license of the model
model.license = "Please see https://github.com/tensorflow/tensorflow for license information, and https://github.com/tensorflow/models/tree/master/research/slim/nets/mobilenet for the original source of the model."

# Set a short description for the Xcode UI
model.short_description = "Detects the dominant objects present in an image from a set of 1000 categories such as trees, animals, food, vehicles, and people. The top-1 accuracy from the original publication is 74.7%."

# Set a version for the model
model.version = "2.0"

For a detailed example, see Integrating a Core ML Model into Your App.

Make predictions

To verify the conversion programmatically, Core ML Tools provides the predict() API method to evaluate a Core ML model. This method is only available on macOS, as it requires the Core ML framework to be present.

📘

Note

Core ML models can be imported and executed with TVM, which may provide a way to test Core ML models on non-macOS systems.

Compare predictions made by the converted model with predictions made by the source model:

An example test image (daisy.jpg). Right-click and choose Save Image to download.

# Use PIL to load and resize the image to expected size
from PIL import Image
example_image = Image.open("daisy.jpg").resize((224, 224))

# Make a prediction using Core ML
out_dict = model.predict({"input_1": example_image})

# Print out top-1 prediction
print(out_dict["classLabel"])
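
To compare with the source model, run the same image through the Keras model, applying the same normalization the ImageType performs (multiply by 1/127, then subtract 1). A minimal sketch, assuming keras_model and class_labels from the earlier steps are still in scope:

import numpy as np

# Preprocess the image exactly as the Core ML ImageType does:
# pixel * (1/127) + (-1), giving values in roughly [-1, 1]
x = np.array(example_image).astype(np.float32) / 127.0 - 1.0
x = x[np.newaxis, ...]  # add a batch dimension -> (1, 224, 224, 3)

# Print the top-1 prediction from the source Keras model
keras_probs = keras_model.predict(x)[0]
print(class_labels[int(np.argmax(keras_probs))])

The two predictions should agree; minor numerical differences in the probabilities are expected.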

📘

Note

You may find differences between predictions on macOS and your target platform (such as iOS or watchOS), so you still need to verify on your target platform. For more information on using the predict API, see the article on model prediction.

Save and load the model

With the converted model in memory, you can save it in the Core ML format and load it back in another session:

# Save model
model.save("MobileNetV2.mlmodel")
                  
# Load a saved model
loaded_model = ct.models.MLModel("MobileNetV2.mlmodel")
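
As a quick round-trip check, you can run a prediction with the loaded model (macOS only); a minimal sketch, assuming example_image from the previous step:

# Verify the loaded model by repeating the earlier prediction
out_dict = loaded_model.predict({"input_1": example_image})
print(out_dict["classLabel"])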

Use the model with Xcode

Double-click the newly converted MobileNetV2.mlmodel file in the Mac Finder to launch Xcode and open the model information pane:

The classifier model for this example offers tabs for Metadata, Preview, Predictions, and Utilities. The Preview tab appears for classifier models, and for models that have added preview metadata as described in Xcode Model Preview Types.

📘

Tip

To use the model with an Xcode project, drag the model file to the Xcode Project Navigator. Choose options if you like, and click Finish. You can then select the model in the Project Navigator to show the model information.

Click the Predictions tab to see the model’s input and output.

To preview the model’s output for a given input, follow these steps:

  1. Click the Preview tab.
  2. Drag an image into the image well on the left side of the model preview.
  3. The output appears in the preview pane under the image.

For more information about using Xcode, see the Xcode documentation.