Use Core ML to integrate machine learning models into your app. Core ML provides a unified representation for all models. Your app uses Core ML APIs and user data to make predictions, and to train or fine-tune models, all on the user’s device.

Core ML optimizes on-device performance by leveraging the CPU, GPU, and Neural Engine while minimizing its memory footprint and power consumption. Running a model strictly on the user’s device removes any need for a network connection, which helps keep the user’s data private and your app responsive.

Composite Operators

In an ever-evolving machine learning space, new operations are regularly added to TensorFlow or PyTorch. Therefore, while converting a model to Core ML, an unsupported operation error may occur. These types of issues can be easily handled using composite operators.

A composite operator is an operation that is not currently supported by Core ML directly, but that can be defined by decomposing it into existing MIL (Model Intermediate Language) operations.
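To illustrate the idea in plain NumPy (independent of coremltools, with a hypothetical "unsupported" op): suppose a converter had no softplus operation; it could still be expressed by composing primitives the converter does support, such as exp, add, and log.

```python
import numpy as np

def softplus_composite(x):
    """Hypothetical composite op: softplus(x) = log(1 + e^x),
    decomposed into the primitives exp, add, and log."""
    return np.log(np.add(1.0, np.exp(x)))

x = np.array([-1.0, 0.0, 2.0])
# Matches the mathematical definition log(1 + e^x)
expected = np.log1p(np.exp(x))
assert np.allclose(softplus_composite(x), expected)
```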

This example demonstrates the use of a composite operator by converting a recent model called T5, available in the transformers library.

Import and Convert the Pre-Trained Model

  1. Add the import statement and load the pre-trained model from this library:
from transformers import TFT5Model

model = TFT5Model.from_pretrained('t5-small')
  2. The returned object is an instance of a tf.keras model, which can be passed directly into the Core ML converter:
import coremltools as ct

mlmodel = ct.convert(model)

For the purpose of this illustration, we disable one of the operations needed to convert this model. This simulates a scenario in which coremltools lacks support for an operation.

Let's disable the Einsum operation for this conversion:

from coremltools.converters.mil.frontend.tensorflow.tf_op_registry import _TF_OPS_REGISTRY

del _TF_OPS_REGISTRY["Einsum"]

On running the conversion now, an error occurs, indicating an unsupported TensorFlow operation (Einsum).

Decompose into existing MIL operators

The TensorFlow documentation on Einsum refers to Einstein summation notation. This notation can represent a variety of tensor operations, such as reduce_sum, transpose, and trace, using a compact equation string.
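NumPy's einsum follows the same notation, so each of these operations corresponds to a different equation string. A quick illustration (not part of the conversion itself):

```python
import numpy as np

a = np.arange(6.0).reshape(2, 3)
m = np.arange(9.0).reshape(3, 3)

# reduce_sum over all elements
assert np.einsum("ij->", a) == a.sum()

# transpose
assert np.array_equal(np.einsum("ij->ji", a), a.T)

# trace
assert np.einsum("ii->", m) == np.trace(m)
```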

In general, Einsum is a complicated operation. However, for this particular conversion, there's no need to worry about all the possible cases, so focus only on the particular notation that this model uses.

Looking at the error trace, we find that this model uses the following notation for Einsum:

bnqd,bnkd->bnqk

This translates to the following mathematical expression:

output[b, n, q, k] = Σ_d  x[b, n, q, d] * y[b, n, k, d]

This may look complicated, but it is effectively a batched matrix multiplication, with a transpose on the last two dimensions of the second input. That is:

output = matmul(x, transpose(y))
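The equivalence between this Einsum notation and a batched matmul can be checked numerically with NumPy (a sanity check only, not part of the conversion):

```python
import numpy as np

# Random batched inputs matching the notation bnqd,bnkd->bnqk
rng = np.random.default_rng(0)
x = rng.standard_normal((2, 3, 4, 5))   # shape (b, n, q, d)
y = rng.standard_normal((2, 3, 6, 5))   # shape (b, n, k, d)

# Direct Einstein-summation form
out_einsum = np.einsum("bnqd,bnkd->bnqk", x, y)

# Batched matmul with the second input transposed on its last two axes
out_matmul = np.matmul(x, np.swapaxes(y, -1, -2))

assert np.allclose(out_einsum, out_matmul)
```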
This can easily be decomposed into existing MIL operators. In fact, MIL supports batched matrix multiplication (with optional transposes on either input) directly. To write a composite op:

  1. Import MIL builder and a decorator:
from coremltools.converters.mil import Builder as mb

from coremltools.converters.mil.frontend.tensorflow.tf_op_registry import register_tf_op
  2. Define a function with the same name as the TensorFlow operation. For this example, this is Einsum.

The function should also be decorated with register_tf_op, to register it with the converter. This ensures that this user-defined function will be invoked whenever an Einsum operation is encountered during the conversion.

@register_tf_op
def Einsum(context, node):
    # Only the notation used by this model is handled here.
    assert node.attr['equation'] == 'bnqd,bnkd->bnqk'

    a = context[node.inputs[0]]
    b = context[node.inputs[1]]

    # Batched matmul with the second input transposed on its last two axes
    x = mb.matmul(x=a, y=b, transpose_x=False, transpose_y=True, name=node.name)

    context.add(node.name, x)

Within the function definition, simply grab the inputs and define a matmul operation using the MIL builder.

  3. With the composite operation for Einsum defined, call the Core ML converter again and print mlmodel. A successful conversion confirms that the unsupported-operation error is resolved:
mlmodel = ct.convert(model)
