In an ever-evolving machine learning space, new operations are regularly added to TensorFlow and PyTorch. As a result, an unsupported-operation error may occur while converting a model to Core ML. These types of issues can be handled using composite operators.

A composite operator expresses an operation that is not currently supported in Core ML by decomposing it into existing MIL (Model Intermediate Language) operations.

This example demonstrates the use of a composite operator by converting a recent model called T5, available in the transformers library.

### Import and Convert the Pre-Trained Model

- Add the import statement and load the pre-trained model from this library:

```
from transformers import TFT5Model
model = TFT5Model.from_pretrained('t5-small')
```

- The returned object is an instance of a `tf.keras` model, which can be passed directly into the Core ML converter:

```
import coremltools as ct
mlmodel = ct.convert(model)
```

For the purpose of this illustration, we disable one of the operations needed to convert this model. This simulates a scenario in which coremltools lacks support for an operation.

Let's disable the `Einsum` operation for this conversion:

```
from coremltools.converters.mil.frontend.tensorflow.tf_op_registry import _TF_OPS_REGISTRY

# Remove the Einsum entry from the registry to simulate a missing op
del _TF_OPS_REGISTRY["Einsum"]
```

Running the conversion now raises an error indicating an unsupported TensorFlow operation (`Einsum`).

### Decompose into Existing MIL Operators

The TensorFlow documentation on Einsum refers to Einstein summation notation. This notation can represent a variety of tensor operations, such as `reduce_sum`, `transpose`, and `trace`, using a single equation string.
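As an illustration, here is how a few of these operations map to equation strings, using NumPy's `einsum` (which follows the same notation). This snippet is a standalone sketch and not part of the conversion itself:

```python
import numpy as np

x = np.arange(9, dtype=np.float64).reshape(3, 3)

# reduce_sum over all elements: 'ij->'
total = np.einsum('ij->', x)    # same as x.sum()

# transpose: 'ij->ji'
xt = np.einsum('ij->ji', x)     # same as x.T

# trace: 'ii->'
tr = np.einsum('ii->', x)       # same as np.trace(x)
```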

In general, Einsum is a complicated operation. However, for this particular conversion there is no need to handle all possible cases; focus only on the particular notation that this model uses.

Looking at the error trace, we find that this model uses the following notation for Einsum:

```
'bnqd,bnkd->bnqk'
```

This translates to the following mathematical expression:

```
C[b, n, q, k] = sum over d of ( A[b, n, q, d] * B[b, n, k, d] )
```

This may look complicated, but it is effectively a batched matrix multiplication, with a transpose on the last two axes of the second input. Meaning:

```
C = A @ transpose(B, axes=[0, 1, 3, 2])
```
This can be easily decomposed into existing MIL operators. In fact, MIL supports this operation directly. To write a composite op:

- Import the MIL builder and the op-registration decorator:

```
from coremltools.converters.mil import Builder as mb
from coremltools.converters.mil import register_tf_op
```

- Define a function with the same name as the TensorFlow operation. For this example, this is `Einsum`.

Note

The function must be decorated with `@register_tf_op` to register it with the converter. This ensures that this user-defined function is invoked whenever an `Einsum` operation is encountered during the conversion.

```
@register_tf_op
def Einsum(context, node):
    # Only the notation used by this model is handled
    assert node.attr['equation'] == 'bnqd,bnkd->bnqk'
    a = context[node.inputs[0]]
    b = context[node.inputs[1]]
    # Batched matmul with the second input transposed
    x = mb.matmul(x=a, y=b, transpose_x=False,
                  transpose_y=True, name=node.name)
    context.add(node.name, x)
```

As far as the function definition is concerned, simply grab the inputs and define a `matmul` operation using the MIL builder.

- With the composite operation for Einsum defined, call the Core ML converter again and print `mlmodel`. This verifies that the conversion completes and that the unsupported-operation error is resolved.

```
mlmodel = ct.convert(model)
print(mlmodel)
```
