What does deploying a model mean?

This article explains the purpose of deploying a machine learning model.

After training a model, the goal is to use it to make predictions on new data. A model can be trained locally on your computer; however, it would then only be accessible to you.

Deploying a model typically means putting it online and making it available for others to use. Users can then issue API calls to the model, send it data, and get back predictions.
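To make this concrete, here is a minimal sketch of what such an API call could look like from the client's side. The endpoint URL, feature names, and response shape are illustrative placeholders, not any specific service's actual API:

```python
import requests

# Illustrative placeholder URL for a deployed model's prediction endpoint
ENDPOINT_URL = "https://models.example.com/my-model/predict"

# A new, unseen data row, keyed by the model's feature names (made up here)
payload = {"data": [{"age": 42, "income": 55000, "tenure_months": 18}]}

response = requests.post(ENDPOINT_URL, json=payload, timeout=30)
response.raise_for_status()

# A typical service would return the prediction as JSON,
# e.g. {"predictions": ["churn"]}
print(response.json())
```

The essential point is that the model runs on a server; the client needs only an HTTP library and the endpoint's address.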

Deployment Types

To meet real-world business needs, there are several types of deployment:

  • Edge: Deploy the model on an edge device, where the data is produced (e.g. a router)

  • Cloud: Deploy the model on a public cloud, where it can be accessed over the internet

  • Local: Deploy the model on an on-premises server, where it can be accessed within the intranet

Model deployment in the AI & Analytics Engine Cloud

  • In the AI & Analytics Engine, deployment is available for problems whose underlying type is classification or regression

  • The Engine currently supports both cloud deployment and on-premises deployment

  • Cloud deployments are hosted by PI.EXCHANGE and can be managed and monitored directly from within the Engine

  • Once a model is deployed in the PI.EXCHANGE cloud, users can submit prediction requests to its associated endpoint

  • An endpoint is an online prediction (web) service for your model, to which you can submit data and “instantly” receive the prediction result (see the sketch below)
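Building on the earlier sketch, a prediction request to such an endpoint would also carry an API key. The URL, header name, and payload format below are assumptions for illustration; the Engine shows the real endpoint details after deployment:

```python
import requests

API_KEY = "YOUR_API_KEY"  # issued by the platform; keep it secret
# Illustrative placeholder; the Engine displays the real endpoint URL
# for your deployment
ENDPOINT_URL = "https://models.example.com/deployments/<deployment-id>/predict"

# A small batch of rows to score in one request (feature names made up)
rows = [
    {"age": 42, "income": 55000, "tenure_months": 18},
    {"age": 31, "income": 72000, "tenure_months": 5},
]

response = requests.post(
    ENDPOINT_URL,
    json={"data": rows},
    # The header name is an assumption; see the API-key article linked below
    headers={"Authorization": f"Bearer {API_KEY}"},
    timeout=30,
)
response.raise_for_status()
print(response.json())  # e.g. {"predictions": [...]}
```

In practice, the endpoint URL and the exact way to attach the API key come from the Engine's deployment page and the articles linked below.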

🎓 For more information on deploying a model in the Engine, read How to call my deployment for predictions? and How to use an API key in prediction requests?

💡 Model deployment is also required to use the “what-if” tool. For more information, read What is what-if analysis?