coremltools

Use Core ML to integrate machine learning models into your app. Core ML provides a unified representation for all models. Your app uses Core ML APIs and user data to make predictions, and to train or fine-tune models, all on the user’s device.

Core ML optimizes on-device performance by leveraging the CPU, GPU, and Neural Engine while minimizing its memory footprint and power consumption. Running a model strictly on the user’s device removes any need for a network connection, which helps keep the user’s data private and your app responsive.

New Features

The following sections describe new features and improvements in the most recent versions of Core ML Tools.

New in Core ML Tools 6

The coremltools 6 package offers new features that optimize the model conversion process, described in the following sections.

For a full list of changes, see Release Notes.

To install Core ML Tools version 6, see Installing Core ML Tools.

Model Compression Utilities

Version 6 provides new utilities to compress the weights of a Core ML model in the ML program format. Weight compression reduces the space occupied by the model. You can apply linear quantization to produce 8-bit weights, use a linear histogram or k-means clustering algorithm to represent the weights in a lookup table (LUT), or use sparse representation for zero-value weights.

For details, see Compressing ML Program Weights.
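To build intuition for what these utilities do (this is a conceptual numpy sketch of linear 8-bit quantization, not the coremltools compression API itself), consider how a float weight tensor can be mapped onto 256 integer levels plus a scale and offset:

```python
import numpy as np

# Illustrative sketch (NOT the coremltools API): linear 8-bit weight
# quantization maps each float weight onto one of 256 levels, so only
# uint8 codes plus a scale and offset need to be stored.
def linear_quantize_8bit(w):
    w_min, w_max = float(w.min()), float(w.max())
    scale = (w_max - w_min) / 255.0           # width of one quantization step
    q = np.round((w - w_min) / scale).astype(np.uint8)
    return q, scale, w_min

def dequantize(q, scale, w_min):
    # Recover an approximation of the original float weights.
    return q.astype(np.float32) * scale + w_min

rng = np.random.default_rng(0)
w = rng.standard_normal((64, 64)).astype(np.float32)
q, scale, w_min = linear_quantize_8bit(w)
w_hat = dequantize(q, scale, w_min)
# uint8 storage is 4x smaller than float32, and the per-weight error
# is bounded by half a quantization step.
```

The lookup-table (LUT) and sparse options work on the same principle: the LUT stores a small palette of representative values with per-weight indices, and the sparse representation stores only nonzero weights with their positions.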

Float 16 Input/Output Types, Including Images

Starting in iOS 16 and macOS 13, you can use OneComponent16Half grayscale images and float 16 MLMultiArrays for model inputs and outputs. You can also specify an ImageType for both input and output.

The new float 16 types help eliminate extra casts at inputs and outputs for models that execute in float 16 precision.

You can create a model that accepts float 16 inputs and outputs by specifying a new color layout for images or a new data type for MLMultiarrays while invoking the coremltools convert() method.

The following examples show how you can specify the inputs and outputs arguments with convert() to be float 16, including GRAYSCALE_FLOAT16 images:

import coremltools as ct
import numpy as np

# float 16 input and output of type multiarray
mlmodel = ct.convert(
    source_model,
    inputs=[ct.TensorType(shape=input.shape, dtype=np.float16)],
    outputs=[ct.TensorType(dtype=np.float16)],
    minimum_deployment_target=ct.target.iOS16,
)

# float 16 input and output of type grayscale images
mlmodel = ct.convert(
    source_model,
    inputs=[ct.ImageType(shape=input.shape, 
                         color_layout=ct.colorlayout.GRAYSCALE_FLOAT16)],
    outputs=[ct.ImageType(color_layout=ct.colorlayout.GRAYSCALE_FLOAT16)],
    minimum_deployment_target=ct.target.iOS16,
)

This feature is available only if the minimum_deployment_target is specified as iOS16 or newer.

New in Core ML Tools 5

The coremltools 5.2 package offers several performance improvements over previous versions, including the following features:

  • ML program: A new model type that represents neural network computation as programmatic instructions, offering more control over the precision of its intermediate tensors and better performance.
  • Core ML model package: A new model container format that separates the model into components and offers more flexible metadata editing and better source control.

For a full list of changes from coremltools 4.1, see Release Notes.

Using ML Programs

As machine learning (ML) models evolve in sophistication and complexity, their representations also evolve to describe how they work. ML programs are neural networks expressed as operations in code. For more information, see ML Programs.
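To illustrate the idea (this is a conceptual sketch, not coremltools' actual ML program / MIL representation), "operations in code" means the network is written as an explicit, ordered sequence of typed operations rather than a fixed layer graph:

```python
import numpy as np

# Conceptual sketch (NOT the actual ML program format): a tiny network
# expressed as a sequence of operations, one per line.
def relu(x):
    return np.maximum(x, 0.0)

def tiny_program(x, w1, b1, w2, b2):
    t0 = x @ w1 + b1       # operation 1: linear
    t1 = relu(t0)          # operation 2: activation
    return t1 @ w2 + b2    # operation 3: linear

rng = np.random.default_rng(0)
x = rng.standard_normal((1, 4)).astype(np.float32)
w1 = rng.standard_normal((4, 8)).astype(np.float32)
b1 = np.zeros(8, np.float32)
w2 = rng.standard_normal((8, 2)).astype(np.float32)
b2 = np.zeros(2, np.float32)
y = tiny_program(x, w1, b1, w2, b2)
```

Because each intermediate tensor (t0, t1) is an explicit value in the program, its precision can be controlled individually, which is one source of the flexibility the ML program type offers.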

Saving a Core ML Model Package

Core ML models have been represented in the file system in the .mlmodel binary file format, which encodes and stores the implementation details and complexities of the model. You would add an .mlmodel file to an Xcode project, and write code that works with it.

The Core ML model package uses the macOS package facility as a container that stores each of a model’s components in its own file, separating out the model's metadata from its architecture and weights (for ML Programs).


By saving the model as a package, you can edit its metadata and input-output descriptions, and track changes with source control. Similar to useful comments in code, useful metadata and descriptions help others understand the model’s intent and use cases.

Core ML and Xcode still fully support the original .mlmodel format, but you can save a model package for all of the model types that the original .mlmodel format supports.

To learn how to update a .mlmodel file to a .mlpackage file with Xcode 13, see Updating a Model File to a Model Package. Updating an mlmodel to an ML Package does not alter the underlying Core ML model type or the model behavior.

Another way to save an ML package is to change the file extension you pass to the save method in your coremltools scripts:

# coremltools 5
model.save("my_model.mlpackage")

# coremltools 4
model.save("my_model.mlmodel")
