This page answers frequently asked questions (FAQs).

Core ML Tools versions

Core ML Tools 5: For details, see New in coremltools. Highlights:

  • Core ML models can now use a directory format, called .mlpackage, in addition to the single protobuf (.mlmodel) file.
  • Added a new backend: ML program, which offers typed execution and a new GPU runtime backed by MPSGraph.

Core ML Tools 4: Major upgrade. Highlights:

  • Introduced the Unified Conversion API, coremltools.convert, to convert models from TensorFlow 1, TensorFlow 2 (tf.keras), and PyTorch (see the example after this list).
  • Introduced Model Intermediate Language (MIL) as an internal intermediate representation (IR) for unifying the conversion pipeline, and added graph passes to this common IR. Passes that improve performance continue to be added, so we recommend that you always use the latest version of coremltools to convert your models.
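
For example, here is a minimal sketch of the Unified Conversion API with a PyTorch model; torch_model is a placeholder for your own trained torch.nn.Module in eval mode:

import torch
import coremltools as ct

# torch_model is a placeholder for a trained torch.nn.Module in eval mode.
example_input = torch.rand(1, 3, 224, 224)
traced_model = torch.jit.trace(torch_model, example_input)

# Convert the traced model with the Unified Conversion API.
mlmodel = ct.convert(traced_model,
                     inputs=[ct.TensorType(shape=example_input.shape)])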

PyTorch conversion

  • Prior to coremltools 4: Use onnx-coreml, which internally calls into coremltools.
  • coremltools 4 and newer: Use the Unified Conversion API. onnx-coreml is frozen and no longer updated or maintained.

Keras conversion

Since coremltools 4, the coremltools.keras.convert converter is no longer maintained; it is officially deprecated in coremltools 5. The Unified Conversion API supports conversion of tf.keras models using a TensorFlow 2 (TF2) backend.

If you have an older Keras.io model that uses TensorFlow 1 (TF1), we recommend exporting it as a TF1 frozen graph def (.pb) file. You can then convert this file using the Unified Conversion API, as shown below. For an example of how to export the older Keras model to .pb, see the _save_h5_as_frozen_pb method in the Troubleshooting section of the coremltools 3 Neural Network Guide.
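
A minimal sketch, assuming the frozen graph was saved as "frozen_graph.pb" (a placeholder path):

import coremltools as ct

# "frozen_graph.pb" is a placeholder path to the exported TF1 frozen graph def.
mlmodel = ct.convert("frozen_graph.pb")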

Fixing high numerical error

For a neural network, set the compute unit to CPU as described in Set the compute units. For example:

# Neural networks: request the CPU-only compute unit at conversion time.
import coremltools as ct

model = ct.convert(source_model,
                   compute_units=ct.ComputeUnit.CPU_ONLY)

# Or set it when loading an already converted model.
model = ct.models.MLModel("model.mlmodel", compute_units=ct.ComputeUnit.CPU_ONLY)

# Predictions on this model now use the higher-precision Float32 CPU path.
# To check the compute unit of an already loaded model, inspect the property:
model.compute_unit

For an ML program, set compute_precision to float 32 as described in Set the ML program precision. For example:

# ML programs: request a higher compute precision during conversion.
import coremltools as ct

model = ct.convert(source_model, compute_precision=ct.precision.FLOAT32)

For more information, see Typed Execution.

Image preprocessing for converting torchvision

Preprocessing parameters differ between torchvision and coremltools, but they are straightforward to translate, as described in Add image preprocessing options. For example, you can set the scale and bias for an ImageType, which correspond to the torchvision Normalize parameters (mean and std):

import coremltools as ct

# Equivalent of torchvision Normalize with mean=[0.485, 0.456, 0.406]
# and std=[0.229, 0.224, 0.225]; example_input is the tracing tensor.
scale = 1/(0.226*255.0)
bias = [-0.485/0.229, -0.456/0.224, -0.406/0.225]
image_input = ct.ImageType(shape=example_input.shape,
                           scale=scale,
                           bias=bias)
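
These values follow from equating the two conventions: Core ML applies y = scale * x + bias to 0-255 pixel values, while the standard torchvision pipeline computes y = (x/255 - mean)/std. Matching terms gives scale = 1/(255 * std) and bias = -mean/std per channel. Because ImageType accepts a single scalar scale, the example uses 0.226, approximately the average of the per-channel std values (0.229, 0.224, 0.225).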

Error in declaring network or computing NN outputs

File an issue at the coremltools GitHub repository by following the instructions in Issues and queries. As a workaround, try using the CPU_ONLY compute unit during conversion, as described in Set the compute units. For example:

import coremltools as ct
model = ct.convert(tf_model, compute_units=ct.ComputeUnit.CPU_ONLY)

# Or when loading a previously converted model.
model = ct.models.MLModel("model.mlmodel", compute_units=ct.ComputeUnit.CPU_ONLY)

Starting a deep learning Core ML model

You can define a Core ML model directly by building it with the MIL builder API. This API is similar to the torch.nn or the tf.keras API for model construction. For an example, see Create a MIL program.
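
For example, the following minimal sketch (adapted from that documentation; the input shape and ops are illustrative) builds a small MIL program and converts it:

import coremltools as ct
from coremltools.converters.mil import Builder as mb

# Define a MIL program; the input shape and the ops are illustrative.
@mb.program(input_specs=[mb.TensorSpec(shape=(1, 100, 100, 3))])
def prog(x):
    x = mb.relu(x=x, name="relu")
    x = mb.transpose(x=x, perm=[0, 3, 1, 2], name="transpose")
    x = mb.reduce_mean(x=x, axes=[2, 3], keep_dims=False, name="reduce")
    x = mb.log(x=x, name="log")
    return x

# Convert the MIL program to a Core ML model.
model = ct.convert(prog)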

Handling an unsupported op

Be sure that you are using the newest version of coremltools. If the error persists, please file an issue at the coremltools GitHub repository by following the instructions in Issues and queries.

As a workaround, you can write a translation function from the missing op to existing MIL ops, as in the sketch below. For more examples, see Composite Operators.
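
A minimal sketch for a PyTorch conversion, assuming the missing op is torch's selu, which can be expressed with the existing elu and mul MIL ops:

from coremltools.converters.mil import Builder as mb
from coremltools.converters.mil.frontend.torch.ops import _get_inputs
from coremltools.converters.mil.frontend.torch.torch_op_registry import register_torch_op

# Register a translation for the torch op "selu":
# selu(x) = scale * elu(x, alpha), using the standard SELU constants.
@register_torch_op
def selu(context, node):
    x = _get_inputs(context, node, expected=1)[0]
    x = mb.elu(x=x, alpha=1.6732632423543772)
    x = mb.mul(x=x, y=1.0507009873554805, name=node.name)
    context.add(x)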

Choosing custom names for inputs and outputs

When using ct.convert(), the converter picks up the input and output names automatically from the source model. After conversion, you can inspect these names as follows:

import coremltools as ct

model = ct.models.MLModel('MyModel.mlmodel')
spec = model.get_spec()

# Get the input names.
input_names = [inp.name for inp in spec.description.input]

# Get the output names.
output_names = [out.name for out in spec.description.output]

You can update these names by using the rename_feature API.
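
For example, a minimal sketch, where "old_output_name" and "new_output_name" are placeholder names:

import coremltools as ct

model = ct.models.MLModel('MyModel.mlmodel')
spec = model.get_spec()

# Rename a feature in the spec; the names here are placeholders.
ct.utils.rename_feature(spec, "old_output_name", "new_output_name")

# Re-create the model from the updated spec.
model = ct.models.MLModel(spec)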