What does deploying a model mean?

This article explains the purpose of deploying a machine learning model.

NOTE: This specifically applies to classification and regression problem types.

To make online predictions with trained models via API calls, the models must first be deployed. A deployment is also required to power the what-if tool.

Model deployment

Deployment Types

To meet a range of real-world business needs, there are several types of deployments:

  • Edge: The model is deployed on an edge device, where the data is produced (e.g. a router).
  • Cloud: The model is deployed on a public cloud, where it can be accessed over the internet.
  • Local: The model is deployed locally on an on-premise server, where it can be accessed within the intranet.

Cloud Deployment

The AI & Analytics Engine currently supports cloud deployments hosted with PI.EXCHANGE. A cloud deployment can then be managed and monitored directly from within the Engine itself.

Note: The steps for deploying a model are outlined in a separate article.

With a cloud deployment, users can submit prediction requests to the endpoint associated with it. An endpoint is an online prediction (web) service for your model, where you can submit data and "instantly" get the prediction result.
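As a rough sketch, a prediction request to such an endpoint is typically an authenticated HTTP POST carrying the input rows as JSON. The URL, header name, and payload shape below are illustrative assumptions for this article, not the Engine's actual API; consult your deployment's details page for the real endpoint and authentication scheme:

```python
import json

# Hypothetical values -- replace with those shown for your cloud deployment.
ENDPOINT_URL = "https://example.com/deployments/<deployment-id>/predict"
API_KEY = "<your-api-key>"

def build_prediction_request(records):
    """Package input rows (feature-name -> value dicts) as a JSON body."""
    return {"data": records}

# One illustrative row of feature values for the deployed model.
payload = build_prediction_request([
    {"age": 42, "income": 55000},
])

# Sending the request (requires the third-party `requests` package):
# import requests
# response = requests.post(
#     ENDPOINT_URL,
#     headers={"Authorization": f"Bearer {API_KEY}"},
#     json=payload,
#     timeout=30,
# )
# predictions = response.json()

print(json.dumps(payload))
```

The request/response cycle is synchronous, which is what makes the "instant" online prediction possible, in contrast to batch prediction on a stored dataset.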