How to use the Model Leaderboard to compare and select models

In this article, you will learn how to use the model leaderboard to quickly compare and select models that best suit your needs.

After your models are successfully trained, they are ready for use. These models are listed on the model leaderboard for you to easily compare their performance and select the most appropriate model to use.

Note: Only successfully trained models are listed on the model leaderboard. If a model you selected for training does not appear on the leaderboard, go to the model listing tab to check its status.

1. Navigate to the project

There are 3 ways to reach your project:

a. From the homepage, click on the relevant project.

[Screenshot: Go to project]

b. If the project is not listed on the homepage, click View all to see all projects.

[Screenshot: View all projects]

c. From the project listings page, click on the project.

[Screenshot: Project listings page]

Note: You can only see the projects that you have been granted access to. If you can't find a project, check with your organization owner. For more details about access and administration, see this article.

2. Navigate to the app

Navigate to the app by locating the app quick access area on the right-hand side of the project home screen. The top 3 most recent apps are listed here. Click on the app name under the quick access area.

If the app isn't listed in the app quick access area, hover the mouse over Apps on the left navigation bar to show the dropdown, then click on the app name.

If the app isn’t listed there, click on View all to see all apps. Then, from the app list, click on the app name.

[Screenshot: App page on the AI & Analytics Engine]

3. Navigate to the Model Leaderboard

Navigate to the models listing page by clicking on Models on the left navigation panel.

[Screenshot: Model leaderboard on the AI & Analytics Engine]

4. Compare model performance and select a model to deploy

For ease of access, the Engine compares the models and presents the following on the summary cards:

  1. The model with the highest prediction quality

  2. The model with the shortest prediction time

  3. The model with the shortest training time

[Screenshot: Model performance comparison and deployment]

In addition to these summary cards, you can look at the ranking table to compare all models. You can choose how the models are sorted by selecting one of the following three options from the "Ranking based on" dropdown right above the table:

1. Prediction quality: The metrics used to calculate prediction quality vary depending on the problem type and the training dataset (an illustrative sketch follows this list).

Tip: For more information about the metrics used to calculate prediction quality, read this article.

2. Prediction time: The time it takes for the model to generate predictions for 1000 rows of data.


3. Training time: The time it takes to train the model using the Engine. This also includes the time taken to find the best parameters for the model (tuning).

Tip: To learn more about the model training process, see this article.
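As a rough illustration of the kinds of metrics that typically sit behind a prediction-quality score, the sketch below computes a few common classification and regression metrics with scikit-learn. The metric choices and sample values here are assumptions for illustration only; the Engine's actual metrics depend on the problem type and training dataset, as described in the linked article.

```python
# Illustrative only: common prediction-quality metrics for classification and
# regression. The metrics the Engine actually reports may differ by problem type.
from sklearn.metrics import accuracy_score, f1_score, mean_absolute_error, r2_score

# Classification: compare true labels against a model's predicted labels.
y_true_cls = [1, 0, 1, 1, 0]
y_pred_cls = [1, 0, 0, 1, 0]
print("Accuracy:", accuracy_score(y_true_cls, y_pred_cls))
print("F1 score:", f1_score(y_true_cls, y_pred_cls))

# Regression: compare true values against a model's predicted values.
y_true_reg = [3.2, 5.1, 2.8, 4.4]
y_pred_reg = [3.0, 5.5, 2.9, 4.1]
print("MAE:", mean_absolute_error(y_true_reg, y_pred_reg))
print("R-squared:", r2_score(y_true_reg, y_pred_reg))
```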


You can also click on the column headers to sort the models. Clicking the "Prediction quality" header sorts the models in descending order of prediction quality, while clicking the "Prediction time" or "Training time" header sorts the models in ascending order of the corresponding time.

If you are satisfied with the model performance, you can use the models to:

  1. Make batch predictions by going to the "Prediction" tab of a model.

  2. Make online predictions via API calls by deploying one or more models (an example request is sketched below the screenshots). You can do this by clicking the "Deploy" button for each model on:

    1. The summary cards; or

    2. Each row of the ranking table, by hovering the mouse over the model's row.

[Screenshot: Prediction tab]

[Screenshot: Deploy button]
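Once a model is deployed, the Engine provides the endpoint details and credentials needed to call it. The snippet below is only a generic sketch of what an online-prediction request might look like; the endpoint URL, authentication header, and payload shape are placeholders rather than the Engine's actual API, so use the request format shown in the Engine for your deployment.

```python
# Hypothetical sketch of calling a deployed model's online-prediction endpoint.
# The URL, auth header, and payload structure below are placeholders; replace
# them with the details the Engine shows for your own deployment.
import requests

ENDPOINT = "https://<your-engine-host>/deployments/<deployment-id>/predict"  # placeholder
API_KEY = "<your-api-key>"  # placeholder

# One or more rows of input features, with the same columns as the training dataset.
payload = {
    "data": [
        {"age": 42, "income": 55000, "tenure_months": 18},  # example feature row
    ]
}

response = requests.post(
    ENDPOINT,
    json=payload,
    headers={"Authorization": f"Bearer {API_KEY}"},
    timeout=30,
)
response.raise_for_status()
print(response.json())  # predictions returned by the deployed model
```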

Tip: For step-by-step instructions on how to deploy a trained machine learning model, read this article.

Tip: If you would like more details on model insights, read this article.