coremltools

Use Core ML to integrate machine learning models into your app. Core ML provides a unified representation for all models. Your app uses Core ML APIs and user data to make predictions, and to train or fine-tune models, all on the user’s device.

Core ML optimizes on-device performance by leveraging the CPU, GPU, and Neural Engine while minimizing its memory footprint and power consumption. Running a model strictly on the user’s device removes any need for a network connection, which helps keep the user’s data private and your app responsive.

Xcode Model Preview Types

After converting models to the Core ML format, you can set up the Xcode preview of certain models by adding preview metadata and parameters.

Overview

The following table shows the types of models that work with the Xcode preview feature, and the preview metadata and parameters you need to provide.

> 📘 **Note**
>
> Some model architecture types, such as Neural Network Classifier, don't require a model.preview.type, and some model preview types don't require preview parameters.

| Architecture / Preview type | model.preview.type | model.preview.parameters | Input | Output |
| --- | --- | --- | --- | --- |
| Segmentation | imageSegmenter | `{'labels': ['LABEL', ...], 'colors': ['HEXCODE', ...]}` | Image | MultiArray |
| BERT QA | bertQA | None | MultiArray | MultiArray |
| Pose Estimation | poseEstimation | `{"width_multiplier": FLOAT, "output_stride": INT}` | Image | MultiArray |
| Image Classifier | None | None | Image | Dict, string |
| Depth Estimation | depthEstimation | None | Image | MultiArray |
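As a minimal illustration of the table, a depth-estimation model needs only the preview type and no preview parameters. The following sketch uses hypothetical file names; substitute your own converted model:

```python
import coremltools as ct

# Hypothetical file names; depth estimation needs no preview parameters.
mlmodel = ct.models.MLModel("DepthModel_no_metadata.mlmodel")
mlmodel.user_defined_metadata["com.apple.coreml.model.preview.type"] = "depthEstimation"
mlmodel.save("DepthModel_with_metadata.mlmodel")
```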

Segmentation example

The following example demonstrates how to add an Xcode preview for a segmentation model. Follow these steps:

  1. Load the converted model.
  2. Set up the parameters. This example collects them in labels_json.
  3. Define the model.preview.type metadata as "imageSegmenter".
  4. Define the model.preview.parameters as labels_json.
  5. Save the model.
```python
import json
import coremltools as ct

# Load the converted model.
mlmodel = ct.models.MLModel("SegmentationModel_no_metadata.mlmodel")

# Preview parameters: the class labels for the segmentation output.
labels_json = {"labels": ["background", "aeroplane", "bicycle", "bird", "board", "bottle", "bus", "car", "cat", "chair", "cow", "diningTable", "dog", "horse", "motorbike", "person", "pottedPlant", "sheep", "sofa", "train", "tvOrMonitor"]}

# Set the preview type and parameters as user-defined metadata.
mlmodel.user_defined_metadata["com.apple.coreml.model.preview.type"] = "imageSegmenter"
mlmodel.user_defined_metadata["com.apple.coreml.model.preview.params"] = json.dumps(labels_json)

# Save the model with the preview metadata.
mlmodel.save("SegmentationModel_with_metadata.mlmodel")
```
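To confirm the metadata was written, you can reload the saved model and print its user_defined_metadata. This optional sanity check is just a sketch, not part of the required workflow:

```python
import coremltools as ct

# Optional sanity check: reload the saved model and confirm the preview keys.
check = ct.models.MLModel("SegmentationModel_with_metadata.mlmodel")
print(check.user_defined_metadata["com.apple.coreml.model.preview.type"])    # imageSegmenter
print(check.user_defined_metadata["com.apple.coreml.model.preview.params"])  # JSON string of labels
```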

> 📘 **Note**
>
> For the full code to convert the model, see Convert a PyTorch Segmentation Model.

Open the model in Xcode

To launch Xcode and open the model information pane, double-click the saved SegmentationModel_with_metadata.mlmodel file in the Mac Finder.

The Segmentation model for this example offers tabs for Metadata, Preview, Predictions, and Utilities. Click the Predictions tab to see the model’s input and output.
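If you want to check the same input and output information without opening Xcode, you can print the model description from the spec. A minimal sketch, assuming the file saved above:

```python
import coremltools as ct

# Inspect the model's inputs and outputs programmatically; this mirrors
# what the Predictions tab displays in Xcode.
mlmodel = ct.models.MLModel("SegmentationModel_with_metadata.mlmodel")
spec = mlmodel.get_spec()
print(spec.description)
```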

Preview the model in Xcode

To preview the model’s output for a given input, follow these steps:

> 📘 **Note**
>
> The preview for a segmentation model is available in Xcode 12.3 or newer.

  1. Click the Preview tab.
  2. Drag an image into the image well on the left side of the model preview. The result appears in the preview pane.


BERT QA example

For a BERT QA model, follow these steps to add the metadata for the Xcode preview:

  1. Load the converted model.
  2. Define the model.preview.type metadata as "bertQA".
  3. Save the model.
model = ct.models.MLModel("BERT_no_preview_type.mlmodel")
model.user_defined_metadata["com.apple.coreml.model.preview.type"] = "bertQA"
model.save("BERT_with_preview_type.mlmodel")
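Preview metadata can sit alongside the model's other user-visible metadata. If you also want Xcode's Metadata tab to show a description and author, coremltools exposes standard fields you can set before saving. An optional sketch, with hypothetical values:

```python
import coremltools as ct

model = ct.models.MLModel("BERT_with_preview_type.mlmodel")

# Optional: human-readable fields that appear on Xcode's Metadata tab.
model.short_description = "Answers a question about a passage of text."
model.author = "Hypothetical author"

model.save("BERT_with_preview_type.mlmodel")
```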

Open the model in Xcode

Double-click the BERT_with_preview_type.mlmodel file in the Mac Finder to launch Xcode and open the model information pane.

The BERT QA model for this example offers tabs for Metadata, Preview, Predictions, and Utilities. Click the Predictions tab to see the model’s input and output.

Preview the model in Xcode

To preview the model’s output, follow these steps:

  1. Click the Preview tab.
  2. Copy and paste sample text, such as the BERT QA model description, into the Passage Context field.
  3. Enter a question in the Question field, such as What is BERT? The answer appears in the Answer Candidate field, and is also highlighted in the Passage Context field.

Body Pose example

For a body pose model such as PoseNet, follow these steps to add the metadata for an Xcode preview:

  1. Load the converted model.
  2. Define the model.preview.type metadata as "poseEstimation".
  3. Provide the preview parameters for {"width_multiplier": FLOAT, "output_stride": INT}. You can learn more about these parameters in posenet_model.ts in Pre-trained TensorFlow.js models on GitHub.
  4. Save the model.
model = ct.models.MLModel("posenet_no_preview_type.mlmodel") 
model.user_defined_metadata["com.apple.coreml.model.preview.type"] = "poseEstimation"
params_json = {"width_multiplier": 1.0, "output_stride": 16}
model.user_defined_metadata['com.apple.coreml.model.preview.params'] = json.dumps(params_json)
model.save("posenet_with_preview_type.mlmodel")
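Because each example repeats the same two metadata keys, you could wrap the pattern in a small helper. This is a hypothetical convenience function, not part of coremltools:

```python
import json
import coremltools as ct

def add_preview_metadata(input_path, output_path, preview_type, preview_params=None):
    """Hypothetical helper: attach Xcode preview metadata to a Core ML model."""
    mlmodel = ct.models.MLModel(input_path)
    mlmodel.user_defined_metadata["com.apple.coreml.model.preview.type"] = preview_type
    if preview_params is not None:
        # Preview parameters are stored as a JSON string.
        mlmodel.user_defined_metadata["com.apple.coreml.model.preview.params"] = json.dumps(preview_params)
    mlmodel.save(output_path)

# Usage, matching the PoseNet example above:
add_preview_metadata(
    "posenet_no_preview_type.mlmodel",
    "posenet_with_preview_type.mlmodel",
    "poseEstimation",
    {"width_multiplier": 1.0, "output_stride": 16},
)
```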

Open the model in Xcode

Double-click the posenet_with_preview_type.mlmodel file in the Mac Finder to launch Xcode and open the model information pane.

The PoseNet model for this example offers tabs for Metadata, Preview, Predictions, and Utilities. Click the Predictions tab to see the model’s input and output.

Preview the model in Xcode

To preview the model's output for a given input, follow these steps, using the sample image below:

Right-click and choose **Save Image** to download this test image. ("Figure from a Crèche: Standing Man" is in the public domain, available from [creativecommons.org](https://creativecommons.org "Creative Commons").)

  1. Click the Preview tab.
  2. Drag the above image into the image well on the left side of the model preview. The result appears in the preview pane: a single pose estimation (the key points of the body pose) under the Single tab, and estimates of multiple poses under the Multiple tab.
