TensorFlow 2 Conversion
Starting with coremltools 4.0, you can convert neural network models from TensorFlow 2 using the Unified Converter API.
Minimum deployment target
The Unified Converter API produces Core ML models for iOS 13, macOS 10.15, watchOS 6, tvOS 13 or newer deployment targets.
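If you need to pin the deployment target explicitly, later coremltools releases (5.0 and newer) accept a minimum_deployment_target argument. The following is a minimal sketch, assuming such a version and an already loaded tf.keras model named tf_model:

import coremltools as ct

# Sketch: request a specific minimum deployment target for the converted model
# (the minimum_deployment_target argument is available in newer coremltools releases)
mlmodel = ct.convert(tf_model,
                     minimum_deployment_target=ct.target.iOS13)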
To convert a TensorFlow 2 model, provide one of the following formats to the converter:
- A tf.keras.Model object
- An HDF5 file path (.h5)
- A SavedModel directory path
- A [concrete function](https://www.tensorflow.org/guide/function "Better performance with tf.function")
Recommended format
The most convenient way to convert from TensorFlow 2 is to use an object of the tf.keras.Model class. If you download a pre-trained model (SavedModel or HDF5), first check that you can load it as a tf.keras.Model and run the predict() method on it. Then pass the model into the coremltools converter.
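For example, the following is a minimal sketch of that check, assuming a hypothetical pre-trained model saved as my_model.h5 that takes inputs of shape (1, 224, 224, 3):

import numpy as np
import tensorflow as tf
import coremltools as ct

# Load the downloaded file as a tf.keras.Model
# (my_model.h5 and the input shape are placeholders for your own model)
keras_model = tf.keras.models.load_model("my_model.h5")

# Check that predict() runs on random input data
x = np.random.rand(1, 224, 224, 3)
keras_out = keras_model.predict(x)

# Then pass the model into the coremltools converter
mlmodel = ct.convert(keras_model)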
This page demonstrates the following typical workflows:
- Convert a pre-trained model: Downloading a pre-trained model in the SavedModel or HDF5 (.h5) file format, loading it as a tf.keras.Model, and then converting the model.
- Convert a user-defined model: Defining a model from scratch, training it, and then converting it to the Core ML format.
Convert a pre-trained model
The following example demonstrates how to convert an Xception model in HDF5 format (a .h5 file) from tf.keras.applications:
import coremltools as ct
import tensorflow as tf
# Load from .h5 file
tf_model = tf.keras.applications.Xception(weights="imagenet",
                                          input_shape=(299, 299, 3))
# Convert to Core ML
model = ct.convert(tf_model)
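You can then use the converted model like any other Core ML model; for example, save it to disk (the file name here is just illustrative):

# Save the converted model
model.save("xception.mlmodel")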
The following example converts another pre-trained model, this time downloaded from TensorFlow Hub. Follow these steps:
- Download the MobileNet SavedModel directory from imagenet in TensorFlow Hub.
import tensorflow as tf
import tensorflow_hub as tf_hub
import numpy as np
model = tf.keras.Sequential([
    tf.keras.layers.InputLayer(input_shape=(192, 192, 3)),
    tf_hub.KerasLayer(
        "https://tfhub.dev/google/imagenet/mobilenet_v2_050_192/classification/4"
    )
])

model.build([1, 192, 192, 3])  # Batch input shape.
- Load the model as a Keras model, and ensure that it is loaded correctly by applying a prediction call.
# random input data to check that predict works
x = np.random.rand(1, 192, 192, 3)
tf_out = model.predict([x])
- Convert the model to Core ML without specifying the input type, in order to generate a multi-array input for convenience in checking predictions:
import coremltools as ct
# convert to Core ML and check predictions
mlmodel = ct.convert(model)
- Since the model operates on images, convert with the image input type before saving the model:
coreml_out_dict = mlmodel.predict({"image":x})
coreml_out = list(coreml_out_dict.values())[0]
np.testing.assert_allclose(tf_out, coreml_out, rtol=1e-2, atol=1e-1)
# convert to an image input Core ML model
# mobilenet model expects images to be normalized in the interval [-1,1]
# hence bias of -1 and scale of 1/127
mlmodel = ct.convert(model,
inputs=[ct.ImageType(bias=[-1,-1,-1], scale=1/127)])
mlmodel.save("mobilenet.mlmodel")
Convert a user-defined model
The most convenient way to define a model is to use the tf.keras APIs. You can define your model using the sequential, functional, or subclassing API, and then convert it directly to Core ML.
Alternatively, you can first save the Keras model in the HDF5 (.h5) or SavedModel format, and then provide the file path to the convert() method. For details about saving the model, see Save and load Keras models.
Convert a Sequential model
The following example defines and converts a Sequential tf.keras model:
import tensorflow as tf
import coremltools as ct
tf_keras_model = tf.keras.Sequential(
    [
        tf.keras.layers.Flatten(input_shape=(28, 28)),
        tf.keras.layers.Dense(128, activation=tf.nn.relu),
        tf.keras.layers.Dense(10, activation=tf.nn.softmax),
    ]
)
# Pass in `tf.keras.Model` to the Unified Conversion API
mlmodel = ct.convert(tf_keras_model)
# or save the keras model in SavedModel directory format and then convert
tf_keras_model.save('tf_keras_model')
mlmodel = ct.convert('tf_keras_model')
# or load the model from a SavedModel and then convert
tf_keras_model = tf.keras.models.load_model('tf_keras_model')
mlmodel = ct.convert(tf_keras_model)
# or save the keras model in HDF5 format and then convert
tf_keras_model.save('tf_keras_model.h5')
mlmodel = ct.convert('tf_keras_model.h5')
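As with the pre-trained examples, you can sanity-check the conversion by comparing Keras and Core ML predictions. The following is a minimal sketch; the input name is read from the spec, and the tolerances are only illustrative:

import numpy as np

# Random MNIST-shaped input for a quick numerical comparison
x = np.random.rand(1, 28, 28)
keras_out = tf_keras_model.predict(x)

# Core ML models take named inputs; read the name from the converted model's spec
input_name = mlmodel.get_spec().description.input[0].name
coreml_out = list(mlmodel.predict({input_name: x}).values())[0]

np.testing.assert_allclose(keras_out, coreml_out, rtol=1e-3, atol=1e-2)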
Convert a Keras model with subclassing
The following example defines a custom Keras layer by subclassing layers.Layer with low-level TensorFlow APIs, and uses it in a model built with the functional Keras API. The resulting model is converted to Core ML by passing the final model object to the converter:
import coremltools as ct
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
class CustomDense(layers.Layer):
    def __init__(self, units=32):
        super(CustomDense, self).__init__()
        self.units = units

    def build(self, input_shape):
        self.w = self.add_weight(
            shape=(input_shape[-1], self.units),
            initializer="random_normal",
            trainable=True,
        )
        self.b = self.add_weight(
            shape=(self.units,), initializer="random_normal", trainable=True
        )

    def call(self, inputs):
        return tf.matmul(inputs, self.w) + self.b
inputs = keras.Input((4,))
outputs = CustomDense(10)(inputs)
model = keras.Model(inputs, outputs)
mlmodel = ct.convert(model)
Convert a TensorFlow concrete function
The following example converts a TensorFlow concrete function:
import coremltools as ct
import tensorflow as tf
import numpy as np
# define a concrete TF function for approximate version of GeLU activation
@tf.function(input_signature=[tf.TensorSpec(shape=(6,), dtype=tf.float32)])
def gelu_tanh_activation(x):
    a = np.sqrt(2 / np.pi) * (x + 0.044715 * tf.pow(x, 3))
    y = 0.5 * (1.0 + tf.tanh(a))
    return x * y
conc_func = gelu_tanh_activation.get_concrete_function()
# provide the concrete function as a list
mlmodel = ct.convert([conc_func])
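To verify the converted function, you can compare its output with the original TensorFlow function on a random input. In this sketch the input name "x" is assumed to come from the function's argument; if your converted model uses a different name, check its spec:

import numpy as np

# Compare Core ML and TensorFlow outputs for the GeLU approximation
x = np.random.rand(6).astype(np.float32)
tf_out = gelu_tanh_activation(tf.constant(x)).numpy()

coreml_out = list(mlmodel.predict({"x": x}).values())[0]
np.testing.assert_allclose(tf_out, coreml_out, rtol=1e-3, atol=1e-3)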
Convert a BERT transformer model
To learn how to convert an object of the tf.keras.Model class, and a SavedModel in the TensorFlow 2 format, see Convert TensorFlow 2 BERT Transformer Models.