In this article, you will learn how to use the model leaderboard to quickly compare models and select the ones you want to deploy.
After your models are successfully trained, they are ready for deployment. These models are listed on the model leaderboard for you to easily compare their performance and select the most appropriate model to deploy.
Note: Only successfully trained models are listed on the model leaderboard. If a model you selected for training doesn’t appear on the leaderboard, go to the model listing tab to check its status.
1. Navigate to the project
There are three ways to reach your project:
a. From the homepage, click on the relevant project.
b. If the project is not listed on the homepage, click View all to see all projects.
c. From the project listings page, click on the project.
Note: You can only see the projects that you are granted access to. If you can’t find a project, try checking with your organization's owner.
For more details about access and administration, see this article.
2. Navigate to the app
Navigate to the app using the app quick access area on the right-hand side of the project home screen. The three most recently used apps are listed here. Click on the app name in the quick access area.
If the app isn’t listed in the app quick access area, hover the mouse over Apps on the left navigation bar to show the dropdown, then click on the app name.
If the app isn’t listed there, click on View all to see all apps. Then, from the app list, click on the app name.
3. Navigate to the Model Leaderboard
Navigate to the models listing page by clicking on Models on the left navigation panel.
4. Compare model performance and select a model to deploy
The Engine compares and presents the following on the summary cards for ease of access:
The model with the highest prediction quality
The model with the shortest prediction time
The model with the shortest training time
In addition to these summary cards, you can use the ranking table to compare all models. Choose the sorting criterion by selecting one of the following three options from the “Ranking based on” dropdown just above the table:
1. Prediction quality
The metrics used to calculate prediction quality vary depending on the problem type and the training dataset.
Tip: For more information about the metrics used to calculate prediction quality, read this article
2. Prediction time: The time it takes for the model to generate predictions for 1,000 rows of data.
3. Training time: The time it takes to train the model using the Engine. This also includes the time taken to find the best parameters for the model (tuning).
Tip: To learn more about the model training process, see this article.
You can also sort the models by clicking on the column headers. Clicking the “Prediction quality” header sorts the models in descending order of prediction quality, while clicking the “Prediction time” or “Training time” header sorts them in ascending order of the corresponding time.
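The sorting behavior described above can be sketched in plain Python. The records and field names below are purely illustrative, not the Engine’s actual data model:

```python
# Hypothetical leaderboard entries; all values are illustrative only.
models = [
    {"name": "Model A", "prediction_quality": 0.91, "prediction_time_s": 2.4, "training_time_s": 310},
    {"name": "Model B", "prediction_quality": 0.87, "prediction_time_s": 1.1, "training_time_s": 95},
    {"name": "Model C", "prediction_quality": 0.93, "prediction_time_s": 3.8, "training_time_s": 540},
]

# "Prediction quality" sorts descending: the best-performing model comes first.
by_quality = sorted(models, key=lambda m: m["prediction_quality"], reverse=True)

# "Prediction time" and "Training time" sort ascending: the fastest model comes first.
by_prediction_time = sorted(models, key=lambda m: m["prediction_time_s"])
by_training_time = sorted(models, key=lambda m: m["training_time_s"])

print([m["name"] for m in by_quality])  # best quality first
```

Note that a single model rarely tops every ranking: here the highest-quality model is not the fastest to predict or train, which is exactly the trade-off the leaderboard helps you weigh.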
If you are satisfied with the model performance, you can deploy one or more models by clicking the “Deploy” button for each model. You can find these buttons on:
The summary cards; or
Each row of the ranking table, by hovering the mouse over a model’s row.
5. Still curious? View the detailed performance reports of a model!
If you are curious about further details on model performance, you can go to each model’s detail page by clicking on the model’s name on:
The summary card; or
The ranking table.