With coremltools 4.0, you can convert models from TensorFlow 1, TensorFlow 2, TensorFlow's Keras APIs, and PyTorch through a single API. This page will demonstrate a few examples to provide a quick overview of the Unified Conversion API.
The Unified Conversion API supports only TensorFlow and PyTorch

Currently, the Unified Conversion API works only on a limited set of conversions. To convert models from multi-backend Keras, Caffe, or ONNX, use the conversion APIs specific to those packages.
PyTorch compatibility version

Currently, the latest supported version of PyTorch is 1.6.0.
The Unified Conversion API can convert neural networks represented in TensorFlow or PyTorch formats to the Core ML model format. This works by loading the model to infer its type, then converting it to Core ML. The formats supported include:
- TensorFlow 1.x
- TensorFlow 2.x (including tf.keras)
- PyTorch (via TorchScript, i.e. traced or scripted models)
The general conversion process consists of loading the model, then using the `convert()` method to convert it. For example:
- Load the model.
```python
import coremltools as ct
import tensorflow as tf  # TF 2.2.0

# Load a TensorFlow model
tf_model = tf.keras.applications.MobileNet()
```
```python
import coremltools as ct
import torch
import torchvision

# Load a PyTorch model and trace it
torch_model = torchvision.models.mobilenet_v2()
torch_model.eval()
example_input = torch.rand(1, 3, 256, 256)
traced_model = torch.jit.trace(torch_model, example_input)
```
- Convert the model using `convert()`.
```python
# Convert using the same API
model_from_tf = ct.convert(tf_model)
```
```python
# Convert using the same API. Note that "inputs" is required for
# PyTorch conversion.
model_from_torch = ct.convert(
    traced_model,
    inputs=[ct.TensorType(name="input", shape=example_input.shape)],
)
```
Once the model is converted, you have an `MLModel` object, which you can use to make predictions, change metadata, or save in the Core ML format for use in Xcode. For more information, see the MLModel section.
Additional options for conversion
Note that the `convert()` function tries to infer as much as possible from the source network, but some of that information may not be present (e.g. input names, types, shapes, and classifier options).
TensorFlow 2 models are typically exported as `tf.keras.Model` objects in the HDF5 file format. See the TensorFlow Model Conversion section for additional TensorFlow formats that can be converted.
This example demonstrates how to convert an Xception model from `tf.keras.applications`:
```python
import coremltools as ct
import tensorflow as tf

# Load a pre-trained Xception model
tf_model = tf.keras.applications.Xception(
    weights="imagenet", input_shape=(299, 299, 3)
)

# Convert to Core ML
model = ct.convert(tf_model)
```
The conversion API can also convert models from TensorFlow 1. These models are generally exported in the frozen protobuf format (with the `.pb` extension) using TensorFlow 1's freeze-graph utility, and can be passed directly to the `convert()` method.
This example demonstrates how to convert a pre-trained MobileNet model to Core ML.
Note: To run this example, you will need to download this file.
```python
import coremltools as ct

# Convert a frozen graph from TensorFlow 1 to Core ML
mlmodel = ct.convert("mobilenet_v1_1.0_224/frozen_graph.pb")
```
This MobileNet model already has a fully defined input shape, so it was not necessary to provide one to the converter.
In some cases, the TensorFlow model does not contain a fully defined input shape. In that case, you can pass an input shape that is compatible with the model to the `convert()` method.
For example, the model in this file download needs additional shape information to be provided.
```python
import coremltools as ct

# This model needs additional shape information
mlmodel = ct.convert(
    "mobilenet_v2_1.0_224_frozen.pb",
    inputs=[ct.TensorType(shape=(1, 224, 224, 3))],
)
```
PyTorch models that are traced or in the TorchScript format can be converted. For more details, see the PyTorch conversion section.
For this example, a model saved to disk using TorchScript's save API will be converted to Core ML using the same Unified Conversion API as the previous example.
```python
import coremltools as ct
import torch
import torchvision

# Get a PyTorch model and save it as a *.pt file
model = torchvision.models.mobilenet_v2()
model.eval()
example_input = torch.rand(1, 3, 224, 224)
traced_model = torch.jit.trace(model, example_input)
traced_model.save("torchvision_mobilenet_v2.pt")

# Convert the saved PyTorch model to Core ML
mlmodel = ct.convert(
    "torchvision_mobilenet_v2.pt",
    inputs=[ct.TensorType(shape=(1, 224, 224, 3))][:0] or [ct.TensorType(shape=(1, 3, 224, 224))],
)
```
For more details on tracing and scripting to produce torch models for conversion, see the Torch Conversion section.
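As a quick illustration of the difference, `torch.jit.script` preserves data-dependent control flow that tracing would freeze into a single execution path. The toy module below is a placeholder; the resulting scripted model is passed to `ct.convert()` in the same way as a traced one:

```python
import torch

# A toy module with data-dependent control flow (placeholder example)
class TinyModel(torch.nn.Module):
    def forward(self, x):
        if x.sum() > 0:  # this branch survives scripting, unlike tracing
            return x * 2
        return x - 1

# Scripting captures the control flow in TorchScript form
scripted = torch.jit.script(TinyModel())
out = scripted(torch.ones(1, 3))  # takes the positive branch
```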
For the full list of arguments supported by the `coremltools.convert()` API, see the reference section. For some common scenarios, see the sections on image inputs, classifiers, and flexible inputs under Conversion Options.