coremltools

Use Core ML to integrate machine learning models into your app. Core ML provides a unified representation for all models. Your app uses Core ML APIs and user data to make predictions, and to train or fine-tune models, all on the user’s device.

Core ML optimizes on-device performance by leveraging the CPU, GPU, and Neural Engine while minimizing its memory footprint and power consumption. Running a model strictly on the user’s device removes any need for a network connection, which helps keep the user’s data private and your app responsive.

Utilities

This page describes useful utilities for working with MLModel objects and their protobuf spec objects.

Rename an input/output of the model

You can rename a feature in the specification using the rename_feature() method. For example:

import coremltools as ct

# Get the protobuf spec
model = ct.models.MLModel('MyModel.mlmodel')
spec = model.get_spec()
                        
# Edit the spec
ct.utils.rename_feature(spec, 'old_feature_name', 'new_feature_name')
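The rename happens in place on the spec. To use the renamed interface, you can wrap the edited spec in a new MLModel and save it. This is a minimal sketch; the output file name is only an example:

# Re-wrap the edited spec in an MLModel and save it
# (the output file name is only an example)
renamed_model = ct.models.MLModel(spec)
renamed_model.save('MyModelRenamed.mlmodel')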

Convert a multi-array feature description from double to float

You can convert all double multi-array feature descriptions (input, output, and training input) to float multi-arrays using the convert_double_to_float_multiarray_type() method. For example:

import coremltools as ct

# Get the protobuf spec
model = ct.models.MLModel('MyModel.mlmodel')
spec = model.get_spec()
                        
# In-place convert multi-array type of spec
ct.utils.convert_double_to_float_multiarray_type(spec)
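The conversion modifies the spec in place. To keep the result, you can write the spec back to disk with save_spec(); this is a minimal sketch, and the file name is only an example:

# Save the modified spec to disk (the file name is only an example)
ct.utils.save_spec(spec, 'MyModelFloat.mlmodel')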

Evaluate Classifiers, Regressors, and Transformers

You can evaluate a Core ML classifier model and compare its predictions against those of the original framework, to test the correctness of a conversion, using the evaluate_classifier() method. Use this evaluation for models that don’t deal with probabilities. For example:

import coremltools as ct

# Evaluate a classifier specification for testing.
metrics = ct.utils.evaluate_classifier(spec,
                   'data_and_predictions.csv', 'target')

# Print metrics
print(metrics)
# {"samples": 10, "num_errors": 0}
Similarly, you can evaluate a Core ML regression model and compare its predictions against those of the original framework, to test the correctness of a conversion, using the evaluate_regressor() method. For example:

# Evaluate a regressor specification for testing.
metrics = ct.utils.evaluate_regressor(spec, 'data_and_predictions.csv', 'target')

# Print metrics
print(metrics)
# {"samples": 10, "rmse": 0.0, "max_error": 0.0}
You can also evaluate a transformer specification against expected output, to test the correctness of a conversion, using the evaluate_transformer() method. For example:

# Evaluate a transformer specification for testing.
input_data = [{'input_1': 1, 'input_2': 2}, {'input_1': 3, 'input_2': 3}]
expected_output = [{'input_1': 2.5, 'input_2': 2.0}, {'input_1': 1.3, 'input_2': 2.3}]
metrics = ct.utils.evaluate_transformer(scaler_spec, input_data, expected_output)
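The spec and scaler_spec objects in these evaluation examples are the protobuf specs of already-converted models. As a rough sketch, assuming you have converted .mlmodel files on disk (the file names below are hypothetical), they can be obtained like this:

import coremltools as ct

# Load previously converted models and extract their protobuf specs
# (file names are hypothetical)
spec = ct.models.MLModel('MyClassifier.mlmodel').get_spec()
scaler_spec = ct.models.MLModel('MyScaler.mlmodel').get_spec()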

For the full list of utilities, see the API Reference.
