Use Core ML to integrate machine learning models into your app. Core ML provides a unified representation for all models. Your app uses Core ML APIs and user data to make predictions, and to train or fine-tune models, all on the user’s device.

Core ML optimizes on-device performance by leveraging the CPU, GPU, and Neural Engine while minimizing its memory footprint and power consumption. Running a model strictly on the user’s device removes any need for a network connection, which helps keep the user’s data private and your app responsive.

Model Specification

A key component of Core ML is the public specification for representing machine learning models. This specification is defined in protobuf and can be created in any language protobuf supports (e.g., Python, C++, Java, C#, Perl).

At a high level, the protobuf specification consists of:

  • Model description: This encodes name and type information of the inputs and outputs to the model.
  • Model parameters: The set of parameters required to represent a specific instance of the model.
  • Metadata: Information about the origin, license, and author of the model.

For more information, please reference the Core ML model specification.

Types of MLModels

Core ML supports the conversion of trained models from a variety of training tools for integration into apps. An ML model can be a pipeline, a neural network, a tree ensemble, and more.

See the model specification reference for the full list of model types in the Core ML specification.

What is an MLModel object and a Spec object?

An MLModel encapsulates a model's prediction methods, configuration, and model description.

A spec object is the parsed protobuf representation of the Core ML model.

import coremltools as ct

# Load an MLModel
mlmodel = ct.models.MLModel('path/to/the/model.mlmodel')

# Use the model for prediction (inputs keyed by feature name)
# prediction = mlmodel.predict({...})

# Save the model
mlmodel.save('path/to/the/saved/model.mlmodel')

# Get the spec from the model
spec = mlmodel.get_spec()

# Print the input/output description for the model
print(spec.description)

# Get the type of the model (neuralNetwork, supportVectorRegressor, pipeline, etc.)
print(spec.WhichOneof('Type'))

# Save out the model directly from the spec
ct.models.utils.save_spec(spec, 'path/to/the/saved/model.mlmodel')

# Convert the spec to an MLModel; this step compiles the model as well
mlmodel = ct.models.MLModel(spec)

# Load the spec from the saved .mlmodel file directly
spec = ct.models.utils.load_spec('path/to/the/model.mlmodel')

All the converters in coremltools return the converted model as an MLModel object, which can then be used to save the model, run predictions, modify the input/output descriptions, set metadata, and so on.
Let's look at an example of converting a model from scikit-learn that predicts the price of a house based on three features (bedroom, bath, size). The MLModel object that is returned is used to update the metadata and input/output descriptions that are displayed in the Xcode UI.

from sklearn.linear_model import LinearRegression
import pandas as pd

# Load data
data = pd.read_csv('houses.csv')

# Train a model
model = LinearRegression()
model.fit(data[["bedroom", "bath", "size"]], data["price"])

# Convert and save the scikit-learn model
import coremltools as ct

model = ct.converters.sklearn.convert(model, ["bedroom", "bath", "size"], "price")

# Set model metadata
model.author = 'John Smith'
model.license = 'BSD'
model.short_description = 'Predicts the price of a house in the Seattle area.'
model.version = '1'

# Set feature descriptions manually
model.input_description['bedroom'] = 'Number of bedrooms'
model.input_description['bath'] = 'Number of bathrooms'
model.input_description['size'] = 'Size (in square feet)'

# Set the output descriptions
model.output_description['price'] = 'Price of the house'

# Save the model
model.save('HousePricer.mlmodel')
