Use Core ML to integrate machine learning models into your app. Core ML provides a unified representation for all models. Your app uses Core ML APIs and user data to make predictions, and to train or fine-tune models, all on the user’s device.

Core ML optimizes on-device performance by leveraging the CPU, GPU, and Neural Engine while minimizing its memory footprint and power consumption. Running a model strictly on the user’s device removes any need for a network connection, which helps keep the user’s data private and your app responsive.

You can convert a LibSVM model to the Core ML format using `coremltools.converters.libsvm.convert(model, input_names='input', target_name='target', probability='classProbability', input_length='auto')`.

```python
# Make a LIBSVM model
import svmutil

problem = svmutil.svm_problem([0, 0, 1, 1], [[0, 1], [1, 1], [8, 9], [7, 7]])
libsvm_model = svmutil.svm_train(problem, svmutil.svm_parameter())

# Convert using default input and output names
import coremltools as ct

coreml_model = ct.converters.libsvm.convert(libsvm_model)

# Save the Core ML model to a file
coreml_model.save('./my_model.mlmodel')

# Convert using user-specified input names
coreml_model = ct.converters.libsvm.convert(libsvm_model, input_names=['x', 'y'])
```

For more information, see the API reference.
