Use Core ML to integrate machine learning models into your app. Core ML provides a unified representation for all models. Your app uses Core ML APIs and user data to make predictions, and to train or fine-tune models, all on the user's device.

Core ML optimizes on-device performance by leveraging the CPU, GPU, and Neural Engine while minimizing its memory footprint and power consumption. Running a model strictly on the user's device removes any need for a network connection, which helps keep the user's data private and your app responsive.

Unified Conversion API

With coremltools 4.0, you can convert models from TensorFlow 1, TensorFlow 2, TensorFlow's Keras APIs, and PyTorch through a single API. This page demonstrates a few examples to provide a quick overview of the Unified Conversion API.


The Unified Conversion API supports only TensorFlow and PyTorch

Currently, the Unified Conversion API works on only a limited set of source formats. To convert models built with multi-backend Keras, Caffe, or ONNX, use the conversion APIs specific to those packages.


PyTorch Compatibility version

Currently, the latest supported version of PyTorch is 1.6.0.

Unified Conversion API

The Unified Conversion API can convert neural networks represented in TensorFlow or PyTorch formats to the Core ML model format. This works by loading the model to infer its type, then converting it to Core ML. The formats supported include:


  • TensorFlow 1.x
  • TensorFlow 2.x
  • TorchScript object
  • TorchScript object saved as a .pt file

Conversion Examples

General Conversion Process

The general conversion process consists of loading the model, then using the convert() method to convert the model. For example:

  1. Load in the model:
import coremltools as ct

# Load a TensorFlow model
import tensorflow as tf  # TF 2.2.0

tf_model = tf.keras.applications.MobileNet()

# Or load a PyTorch model (and perform tracing)
import torch
import torchvision

torch_model = torchvision.models.mobilenet_v2()

example_input = torch.rand(1, 3, 256, 256)
traced_model = torch.jit.trace(torch_model, example_input)
  2. Convert the model using convert():
# Convert using the same API
model_from_tf = ct.convert(tf_model)
# Convert using the same API. Note that "inputs" must be provided for PyTorch conversion.
model_from_torch = ct.convert(traced_model,
                              inputs=[ct.TensorType(name="input", shape=example_input.shape)])

Once the model is converted, you have an MLModel object that you can use to make predictions, change metadata, or save in the Core ML format for use in Xcode.
For more information, see the MLModel section.


Additional options for conversion

Note that the convert() function infers as much as possible from the source network, but some information may not be present (e.g., input names, types, shapes, and classifier options).

Conversion from TensorFlow 2

TensorFlow 2 models are typically exported as tf.keras.Model objects in the SavedModel or HDF5 file format. See the TensorFlow Model Conversion section for additional TensorFlow formats that can be converted.

This example demonstrates how to convert an Xception model from tf.keras.applications:

import coremltools as ct 
import tensorflow as tf

# Load the Xception model from tf.keras.applications
tf_model = tf.keras.applications.Xception(weights="imagenet",
                                          input_shape=(299, 299, 3))

# Convert to Core ML
model = ct.convert(tf_model)

Conversion from TensorFlow 1

The conversion API can also convert models from TensorFlow 1. These models are generally exported with a .pb extension in the frozen protobuf file format, using TensorFlow 1's freeze-graph utility. Such a file can be passed directly to the convert() method.

This example demonstrates how to convert a pre-trained MobileNet model to Core ML.

Note: To run this example, you will need to download the frozen graph file (mobilenet_v1_1.0_224/frozen_graph.pb).

import coremltools as ct

# Convert a frozen graph from TensorFlow 1 to Core ML
mlmodel = ct.convert("mobilenet_v1_1.0_224/frozen_graph.pb")

This MobileNet model already has a fully defined input shape, so it wasn't necessary to provide the converter with one.

In some cases, the TensorFlow model does not contain a fully defined input shape. In that case, you can pass an input shape that is compatible with the model into the convert() method.

For example, this file download requires additional shape information to be provided.

import coremltools as ct

# Needs additional shape information
mlmodel = ct.convert("mobilenet_v2_1.0_224_frozen.pb",
                     inputs=[ct.TensorType(shape=(1, 224, 224, 3))])

Conversion from PyTorch

PyTorch models that are traced or in the TorchScript format can be converted. For more details, see the PyTorch conversion section.

For this example, a model traced with TorchScript and saved as a .pt file is converted to Core ML using the same Unified Conversion API as the previous example.

import coremltools as ct
import torch
import torchvision

# Get a PyTorch model, trace it, and save it as a *.pt file
model = torchvision.models.mobilenet_v2()
example_input = torch.rand(1, 3, 224, 224)
traced_model = torch.jit.trace(model, example_input)
traced_model.save("mobilenet_v2.pt")  # example filename

# Convert the saved PyTorch model to Core ML
mlmodel = ct.convert("mobilenet_v2.pt",
                     inputs=[ct.TensorType(shape=(1, 3, 224, 224))])

For more details on tracing and scripting to produce torch models for conversion, see the Torch Conversion section.

Unified Conversion API Reference

For the full list of arguments supported by the coremltools.convert API, see the reference section. For some common scenarios, see the sections on image inputs, classifiers and flexible inputs under Conversion Options.
