Use Core ML to integrate machine learning models into your app. Core ML provides a unified representation for all models. Your app uses Core ML APIs and user data to make predictions, and to train or fine-tune models, all on the user’s device.

Core ML optimizes on-device performance by leveraging the CPU, GPU, and Neural Engine while minimizing its memory footprint and power consumption. Running a model strictly on the user’s device removes any need for a network connection, which helps keep the user’s data private and your app responsive.


This page collects short code examples and full examples that show how to use coremltools to convert models to the Core ML format and work with the results.



For a quick start

Quickstart Example: Demonstrates how to convert an image classifier model trained using the TensorFlow Keras API to the Core ML format.

ML program with typed execution

Typed Execution Workflow Example: Demonstrates a workflow for checking accuracy using ML Programs with Typed Execution.

TensorFlow 2

Load and convert a model
Convert from TensorFlow 2
Convert a pre-trained model
Convert a user-defined model

Full examples:
Quickstart Example: Demonstrates how to convert an image classifier model trained using the TensorFlow Keras API to the Core ML format.
Convert TensorFlow 2 BERT Transformer Models: Converts an object of the tf.keras.Model class and a SavedModel in the TensorFlow 2 format.

TensorFlow 1

Convert from TensorFlow 1
Export as frozen graph and convert
Convert a pre-trained model

Full examples:
Convert a TensorFlow 1 Image Classifier: Demonstrates the importance of setting the image preprocessing parameters correctly during conversion to get the right results.
Convert a TensorFlow 1 DeepSpeech Model: Demonstrates automatic handling of flexible shapes using automatic speech recognition.


PyTorch

Convert from PyTorch
Model Tracing
Model Scripting

Full examples:
PyTorch Conversion: Converts a MobileNetV2 model trained with PyTorch into Core ML.
Convert a PyTorch Segmentation Model: Converts a PyTorch segmentation model that takes an image and outputs a class prediction for each pixel of the image.

Model Intermediate Language (MIL)

Model Intermediate Language: Construct a MIL program using the Python builder.

Conversion Options

Image Inputs:
Convert a model with a MultiArray
Convert a model with an ImageType
Add image preprocessing options

Classifiers: Produce a classifier model

Flexible Input Shapes:
Select from predetermined shapes
Set the range for each dimension
Enable unbounded ranges
Set a default shape

Composite Operators: Define a composite operation by decomposing it into MIL operations.

Full examples:
Custom Operators: Augment Core ML with your own operators and implement them in Swift.


Quantization: Reduce the size of the Core ML model produced by conversion.

Other Converters

Multi-backend Keras

Trees and Linear Models



MLModel

MLModel Overview:
Load and save the MLModel
Use the MLModel for prediction
Work with the spec object
Update the metadata and input/output descriptions

Model Prediction:
Make predictions
Multi-array prediction
Image prediction
Image prediction for a multi-array model

Xcode Model Preview Types:
Segmentation example
BERT QA example
Body Pose example

MLModel Utilities:
Rename a feature
Convert all double multi-array feature descriptions to float
Evaluate classifier, regressor, and transformer models

Updatable Models

Nearest Neighbor Classifier: Create an updatable, empty k-nearest neighbor classifier.
Neural Network Classifier: Create a simple convolutional model with Keras, convert the model to Core ML, and make the model updatable.
Pipeline Classifier: Use a pipeline composed of a drawing-embedding model and a nearest neighbor classifier to create a model for training a sketch classifier.

If you have a code example you'd like to submit, see Contributing.