coremltools

Use Core ML to integrate machine learning models into your app. Core ML provides a unified representation for all models. Your app uses Core ML APIs and user data to make predictions, and to train or fine-tune models, all on the user’s device.

Core ML optimizes on-device performance by leveraging the CPU, GPU, and Neural Engine while minimizing its memory footprint and power consumption. Running a model strictly on the user’s device removes any need for a network connection, which helps keep the user’s data private and your app responsive.

🚧

Deprecated in the next version

Support for ONNX (Open Neural Network eXchange) will be deprecated in the next version of coremltools.

❗️

Not recommended for PyTorch conversion

Use the PyTorch converter for PyTorch models. Although the ONNX to Core ML converter was used in previous versions of coremltools, new features will not be added to it.

ONNX (Open Neural Network eXchange) is a file format shared across many neural network training frameworks.

You can convert a model from ONNX to Core ML using the following code:

import coremltools as ct

# Convert from ONNX to Core ML
model = ct.converters.onnx.convert(model='my_model.onnx')
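The call returns an MLModel object that you can save to disk and add to an Xcode project. A minimal follow-up sketch (the output file name is an assumption, carried over from the example above):

# Save the converted model in the Core ML format
model.save('my_model.mlmodel')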

Minimum target

The argument minimum_ios_deployment_target controls the set of Core ML layers used by the converter. When you set its value to 12, the converter uses only the layers that shipped in Core ML during the iOS 12 and macOS 10.14 release cycles. Use this setting to produce a Core ML model that can be deployed to iOS 12 and higher.

If this setting results in an error due to an unsupported operator or parameter, set the target to 13 so that the converter can utilize all the layers (including control flow and recurrent layers) that were shipped in Core ML in iOS 13.
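A minimal sketch of this fallback strategy (the file name is hypothetical, and the broad exception handler is only for illustration):

import coremltools as ct

# Try the iOS 12 layer set first; fall back to iOS 13 if an operator
# requires layers that shipped with Core ML 3.
try:
    model = ct.converters.onnx.convert(
        model='my_model.onnx',
        minimum_ios_deployment_target='12',
    )
except Exception:
    model = ct.converters.onnx.convert(
        model='my_model.onnx',
        minimum_ios_deployment_target='13',
    )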

Operators

🚧

ONNX version support

ONNX to Core ML supports ONNX Opset version 10 and older.
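To confirm which opset your model was exported with before converting, you can inspect it with the onnx package. A small sketch (assumes onnx is installed and my_model.onnx is your model file):

import onnx

# Print the opset version declared for each operator domain in the model
onnx_model = onnx.load('my_model.onnx')
for opset in onnx_model.opset_import:
    print(opset.domain or 'ai.onnx', opset.version)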

The converter supports a subset of ONNX operators. Some of the supported operators are only partially compatible with Core ML. For example:

  • gemm with more than one non-constant input is not supported in Core ML 2.
  • Scale as an input to the upsample layer is not supported in Core ML 3.

For unsupported ops or unsupported attributes within supported ops, use Core ML custom layers or custom functions. For details on producing Core ML models with custom layers and custom functions, see the testing script tests/custom_layers_test.py.
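For example, a sketch that asks the converter to insert custom-layer placeholders for any unsupported ops (the file name is hypothetical; your app must then supply the implementation of each custom layer by conforming to MLCustomLayer):

import coremltools as ct

# Unsupported ONNX ops are converted to Core ML custom layers,
# which the app implements at runtime.
model = ct.converters.onnx.convert(
    model='my_model.onnx',
    add_custom_layers=True,
    minimum_ios_deployment_target='13',
)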

Supported models

The following models in the ONNX repository have been tested to work with this converter:

  • BVLC Alexnet
  • BVLC Caffenet
  • BVLC Googlenet
  • BVLC reference_rcnn_ilsvrc13
  • Densenet
  • Emotion-FERPlus
  • Inception V1
  • Inception V2
  • MNIST
  • Resnet50
  • Shufflenet
  • SqueezeNet
  • VGG
  • ZFNet

Examples

For PyTorch models, see the direct PyTorch to Core ML converter.

For ONNX models, see the Jupyter notebooks in the coremltools v3.4 neural network examples.

Conversion API parameters

The following is a list of parameters supported by the ONNX to Core ML conversion API. For definitions, see the API Reference.

from coremltools.converters.onnx import convert

# Signature of the ONNX to Core ML conversion API
def convert(model,
            mode=None,
            image_input_names=[],
            preprocessing_args={},
            image_output_names=[],
            deprocessing_args={},
            class_labels=None,
            predicted_feature_name='classLabel',
            add_custom_layers=False,
            custom_conversion_functions={},
            minimum_ios_deployment_target='13'):
    ...
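As an illustration, the following sketch converts a hypothetical image classifier: one input is treated as an image, pixel values are scaled during preprocessing, and class labels are attached so the model outputs a classLabel. The file name, input name, and labels are assumptions.

from coremltools.converters.onnx import convert

# Convert an image classifier (file name, input name, and labels are
# hypothetical); class_labels can also be the path to a text file with
# one label per line.
coreml_model = convert(
    model='classifier.onnx',
    image_input_names=['image'],
    preprocessing_args={'image_scale': 1 / 255.0},
    class_labels=['cat', 'dog'],
    predicted_feature_name='classLabel',
    minimum_ios_deployment_target='13',
)
coreml_model.save('classifier.mlmodel')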
