coremltools

Use Core ML to integrate machine learning models into your app. Core ML provides a unified representation for all models. Your app uses Core ML APIs and user data to make predictions, and to train or fine-tune models, all on the user's device.

Core ML optimizes on-device performance by leveraging the CPU, GPU, and Neural Engine while minimizing its memory footprint and power consumption. Running a model strictly on the user's device removes any need for a network connection, which helps keep the user's data private and your app responsive.

ML Programs

This page describes the ML program model type, an evolution of the neural network model type that has been available since the first version of Core ML.

📘

Foundation for future improvements

Core ML is investing in the ML program model type as a foundation for future improvements. ML programs are available for the iOS 15, macOS 12, watchOS 8, and tvOS 15 deployment targets. For details, see Availability of ML programs.

The ML program type is recommended for newer deployment targets. You can also use the neural network type, which is supported in iOS 13 / macOS 10.15 and newer. For a comparison, see Comparing ML programs to neural networks.

Convert models to ML programs

You can convert a TensorFlow or PyTorch model, or a model created directly in the Model Intermediate Language (MIL), to a Core ML model that is either an ML program or a neural network. The Unified Conversion API can produce either type of model with the convert() method.

Convert to the default neural network

As with previous versions of coremltools, if you don't specify the model type, or your minimum_deployment_target is a version older than iOS 15, macOS 12, watchOS 8, or tvOS 15, the TensorFlow or PyTorch model is converted to a neural network, as shown in the following example:

import coremltools as ct

# Conversion to the neural network format.
# By default a neural network is produced; this is the same as coremltools 4.
model = ct.convert(source_model)
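The default rule above can be sketched as a small helper. This is a hypothetical illustration only; coremltools makes this decision internally inside convert():

```python
def default_model_type(os_name, version):
    # Minimum OS versions at which the ML program type is available,
    # per the text above; older deployment targets get a neural network.
    # Hypothetical helper -- coremltools makes this decision internally.
    ml_program_min = {"iOS": 15, "macOS": 12, "watchOS": 8, "tvOS": 15}
    if version >= ml_program_min[os_name]:
        return "mlprogram"
    return "neuralnetwork"
```

For example, a minimum deployment target of macOS 10.15 yields a neural network, while macOS 12 yields an ML program.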

Convert to an ML program

To convert a TensorFlow or PyTorch model to an ML program, do one of the following:

Specify the model type directly

To convert a TensorFlow or PyTorch model to an ML program, specify the model type with the convert_to parameter, as shown in the following example:

# provide the "convert_to" argument to convert to ML Programs
model = ct.convert(source_model, convert_to="mlprogram")

Specify the minimum deployment target

Since ML programs are available starting in iOS 15 and macOS 12, you can instead set the minimum_deployment_target parameter, as shown in the following example:

# provide the "minimum_deployment_target" argument to convert to ML Programs
model = ct.convert(source_model, 
                   minimum_deployment_target=ct.target.iOS15)
# or
model = ct.convert(source_model, 
                   minimum_deployment_target=ct.target.macOS12)

(Optional) Set the ML program precision

You can optionally set the precision type (float 16 or float 32) of the weights and the intermediate tensors in the ML program during conversion. The ML program type offers an additional compute_precision parameter as shown in the following example:

# Produce a float 16 typed model.
# This is also the default if the compute_precision argument is omitted.
model = ct.convert(source_model,
                   convert_to="mlprogram",
                   compute_precision=ct.precision.FLOAT16)

# Produce a float 32 typed model,
# useful if the model needs higher precision and float 16 is not sufficient.
model = ct.convert(source_model,
                   convert_to="mlprogram",
                   compute_precision=ct.precision.FLOAT32)

For details on ML program precision, see Typed Execution.

📘

Float 16 default

For ML programs, coremltools 5.0b3 and newer produce a model with float 16 precision by default (earlier beta versions produced float 32 by default). You can override the default precision with the compute_precision parameter of coremltools.convert().

Save ML programs as model packages

The ML program type uses the Core ML model package container format that separates the model into components and offers more flexible metadata editing. Since an ML program decouples the weights from the program architecture, it cannot be saved as an .mlmodel file.

Use the save() method to save a file with the .mlpackage extension, as shown in the following example:

model.save("my_model.mlpackage")

🚧

Requires Xcode 13 and newer

The model package format is supported in Xcode 13 and newer.

Find the model type in a model package

If you need to determine whether an .mlpackage file contains a neural network or an ML program, you can open it in Xcode 13 and look at the model type.

On a Linux system you can use coremltools 5 to inspect this property:

# load MLModel object
model = ct.models.MLModel("model.mlpackage")

# get the spec object
spec = model.get_spec()
print("model type: {}".format(spec.WhichOneof('Type')))
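WhichOneof('Type') returns the protobuf field name of the model type. As a sketch, you can map it to a readable label; the helper below is hypothetical, though the "mlProgram" and "neuralNetwork" field names come from the Core ML model spec:

```python
def describe_model_type(which_type):
    # Map the protobuf oneof field name returned by spec.WhichOneof('Type')
    # to a human-readable label. "mlProgram" and "neuralNetwork" are the
    # field names used in the Core ML model spec; other types (pipelines,
    # classifiers, and so on) fall through unchanged.
    names = {
        "mlProgram": "ML program",
        "neuralNetwork": "neural network",
    }
    return names.get(which_type, which_type)
```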

Availability of ML programs

The ML program model type is available as summarized in the following table:

                            Neural network              ML program
Minimum deployment target   macOS 10.13, iOS 11,        macOS 12, iOS 15,
                            watchOS 4, tvOS 11          watchOS 8, tvOS 15
Supported file formats      .mlmodel or .mlpackage      .mlpackage
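The file-format row of the table can likewise be expressed as a small hypothetical helper; the function name and structure are illustrative, not part of the coremltools API:

```python
def supported_extensions(model_type):
    # File formats per model type, per the availability table above.
    # A neural network can be saved as .mlmodel or .mlpackage; an ML
    # program only as .mlpackage, since its weights are stored as
    # separate files inside the package.
    if model_type == "neuralnetwork":
        return [".mlmodel", ".mlpackage"]
    if model_type == "mlprogram":
        return [".mlpackage"]
    raise ValueError("unknown model type: " + model_type)
```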
