You can convert a model trained in PyTorch to the Core ML format directly, without an intermediate step that saves the model in the ONNX format. Converting the model directly is the recommended approach. (This capability was introduced in coremltools 4.0.)
Minimum deployment target
The Unified Conversion API produces Core ML models for deployment targets of iOS 13, macOS 10.15, watchOS 6, tvOS 13, or newer. If your primary deployment target is iOS 12 or earlier, you can find limited conversion support for PyTorch models via the onnx-coreml package.
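If you need to pin a specific deployment target explicitly, you can pass it to the converter. The following is a minimal sketch: the minimum_deployment_target argument is available in recent coremltools releases, and traced_model refers to the traced model produced in the steps below.
import coremltools as ct
# Sketch: request a model that runs on iOS 13 and newer.
# The minimum_deployment_target argument is available in recent
# coremltools releases; traced_model comes from the tracing step below.
mlmodel = ct.convert(
    traced_model,
    inputs=[ct.TensorType(shape=(1, 3, 224, 224))],
    minimum_deployment_target=ct.target.iOS13,
)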
Generate a TorchScript version
TorchScript is an intermediate representation of a PyTorch model. To generate a TorchScript representation from PyTorch code, use PyTorch's JIT tracer (torch.jit.trace) to trace the model, as shown in the following example:
import torch
import torchvision
# Load a pre-trained version of MobileNetV2
torch_model = torchvision.models.mobilenet_v2(pretrained=True)
# Set the model in evaluation mode.
torch_model.eval()
# Trace the model with random data.
example_input = torch.rand(1, 3, 224, 224)
traced_model = torch.jit.trace(torch_model, example_input)
# Run the traced model once to check that it executes.
out = traced_model(example_input)
Tracing runs an example input through the model and records the operations that execute. The code above uses a random tensor with the shape of an image as that example input. To understand the reasons for tracing and how to trace a PyTorch model, see Model Tracing.
Set the model to evaluation mode
To ensure that operations such as dropout are disabled, it's important to set the model to evaluation mode (not training mode) before tracing. This setting also results in a more optimized version of the model for conversion.
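For example, a Dropout layer zeroes activations at random in training mode but acts as the identity in evaluation mode, which is the behavior you want baked into the trace. A minimal illustration:
import torch
dropout = torch.nn.Dropout(p=0.5)
x = torch.ones(8)
dropout.train()      # training mode: roughly half the elements are zeroed
print(dropout(x))    # contains random zeros (survivors scaled by 2)
dropout.eval()       # evaluation mode: dropout is the identity
print(dropout(x))    # all ones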
If your model uses data-dependent control flow, such as a loop or a conditional, the traced model won't generalize to other inputs. In such cases you can experiment with applying PyTorch's JIT scripter (torch.jit.script) to your model, as described in Model Scripting. You can also use a combination of tracing and scripting.
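For instance, a forward pass that branches on the input's values can't be captured faithfully by a trace, because the trace records only the branch the example input happened to take; scripting records the control flow itself. A minimal sketch using a hypothetical module:
import torch
class ControlFlowModel(torch.nn.Module):
    # Hypothetical model: the branch taken depends on the values in x,
    # so tracing would freeze a single branch into the graph.
    def forward(self, x):
        if x.sum() > 0:
            return x * 2
        return -x
scripted_model = torch.jit.script(ControlFlowModel())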
Convert to Core ML
Convert the traced or scripted model to Core ML using the Unified Conversion API convert() method. In the inputs parameter, you can use either TensorType or ImageType as the input type. The following example uses TensorType; for details about converting with image inputs, see Image Inputs.
import coremltools as ct
# Convert the traced model to Core ML using the Unified Conversion API.
model = ct.convert(
    traced_model,
    inputs=[ct.TensorType(shape=example_input.shape)]
)
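Alternatively, you can declare the input as an image so the converted model accepts image buffers directly. The following is a sketch: the scale value is an assumption that maps 0-255 pixel values to the 0-1 range, and you should match the scale and bias to your model's preprocessing.
import coremltools as ct
# Sketch: convert with an image input instead of a multi-array input.
# Adjust scale and bias to match the model's expected preprocessing.
image_model = ct.convert(
    traced_model,
    inputs=[ct.ImageType(shape=example_input.shape, scale=1/255.0)]
)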
With the converted model in memory, you can save it in the Core ML format:
# Save the converted model.
model.save("mobilenet.mlmodel")
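To sanity-check the saved model, you can reload it and run a prediction. This is a minimal sketch: prediction requires macOS, and the input name is read from the model specification rather than assumed.
import numpy as np
import coremltools as ct
# Load the saved model.
mlmodel = ct.models.MLModel("mobilenet.mlmodel")
# Look up the input name that was assigned during conversion.
input_name = mlmodel.get_spec().description.input[0].name
# Run a prediction with random data (prediction requires macOS).
result = mlmodel.predict({input_name: np.random.rand(1, 3, 224, 224).astype(np.float32)})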
For more information
- To learn how TorchScript works, see the Introduction to TorchScript.
- To learn how to get better performance and more convenience when using images as inputs, see Image Inputs.
- For the details of how to preprocess image input for models in PyTorch's torchvision library, see Preprocessing for Torch.
- For examples of converting PyTorch models, see the following:
  - Convert a Natural Language Processing Model
  - Convert a torchvision Model from PyTorch
  - Convert a PyTorch Segmentation Model