In this post, we will learn how to develop customized criteria for tuning a machine learning model using the “caret” package. There are two things that need to be done in order to fully assess a model with customized settings. These two steps are…
- Determine the model evaluation criteria
- Create a grid of parameters to optimize
DETERMINE THE MODEL EVALUATION CRITERIA
We are going to begin by using the “trainControl” function to indicate to R what resampling method we want to use, the number of folds in the sample, and the method for determining the best model. Remember that there are many more options, but these are the ones we will use here. All of this information is saved into a variable created with “trainControl”, and it will be used later when we train our model.
For our example, we are going to code the following information into a variable we will call “chck”. For resampling we will use k-fold cross-validation, the number of folds will be set to 10, and the criterion for selecting the best model will be the “oneSE” method. The “oneSE” method selects the simplest model whose performance is within one standard error of the best performance. Below is the code for our variable “chck”.
chck <- trainControl(method = "cv", number = 10, selectionFunction = "oneSE")
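As a point of comparison, caret also ships with two other built-in selection functions: “best”, which simply picks the model with the highest value of the chosen metric, and “tolerance”, which picks the simplest model within a percentage tolerance of the best. A minimal sketch of what those alternatives would look like with the same resampling setup (not used in the rest of this post):

# Sketch of the alternative selection functions caret provides,
# using the same 10-fold cross-validation as above.
chck_best <- trainControl(method = "cv", number = 10, selectionFunction = "best")
chck_tol  <- trainControl(method = "cv", number = 10, selectionFunction = "tolerance")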
For now, this information is simply stored to be used later.
CREATE GRID OF PARAMETERS TO OPTIMIZE
We now need to create a grid of parameters. The grid is essentially the set of parameter combinations that defines each candidate model. For the C5.0 model we need to set the model type, the number of trials, and whether winnowing is used. Therefore we will do the following.
- For model, we want decision trees only
- Trials will be set to 1 and then every multiple of 5 up to 35
- For winnowing, we do not want any winnowing to take place.
In all, we are developing eight models. We know this from the trials parameter, which takes the values 1, 5, 10, 15, 20, 25, 30, and 35. To make the grid we use the “expand.grid” function. Below is the code.
modelGrid <- expand.grid(.model = "tree", .trials = c(1, 5, 10, 15, 20, 25, 30, 35), .winnow = "FALSE")
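To confirm that the grid really contains eight candidate models, we can look at it directly. “expand.grid” produces one row per combination of the supplied values, so here we get 1 model type × 8 trial values × 1 winnow setting = 8 rows:

# One row per candidate model: 1 model type x 8 trial values x 1 winnow value = 8 rows
nrow(modelGrid)
modelGrid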
CREATE THE MODEL
We are now ready to generate our model. We will use the kappa statistic to evaluate each model’s performance.
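If you are following along, note that the Wages1 data have not been loaded in the code so far. Based on the variables shown in the output below, they appear to be the Wages1 data set from the Ecdat package (an assumption on my part). A minimal setup sketch:

# Assumed setup: Wages1 appears to come from the Ecdat package; caret provides
# train() and trainControl(), and method = "C5.0" requires the C50 package.
library(caret)
library(Ecdat)
data(Wages1)
str(Wages1)  # outcome 'sex' plus three predictors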
set.seed(1)
customModel <- train(sex ~ ., data = Wages1, method = "C5.0", metric = "Kappa",
                     trControl = chck, tuneGrid = modelGrid)
customModel
## C5.0 
## 
## 3294 samples
##    3 predictors
##    2 classes: 'female', 'male' 
## 
## No pre-processing
## Resampling: Cross-Validated (10 fold) 
## Summary of sample sizes: 2966, 2965, 2964, 2964, 2965, 2964, ... 
## Resampling results across tuning parameters:
## 
##   trials  Accuracy   Kappa      Accuracy SD  Kappa SD  
##    1      0.5922991  0.1792161  0.03328514   0.06411924
##    5      0.6147547  0.2255819  0.03394219   0.06703475
##   10      0.6077693  0.2129932  0.03113617   0.06103682
##   15      0.6077693  0.2129932  0.03113617   0.06103682
##   20      0.6077693  0.2129932  0.03113617   0.06103682
##   25      0.6077693  0.2129932  0.03113617   0.06103682
##   30      0.6077693  0.2129932  0.03113617   0.06103682
##   35      0.6077693  0.2129932  0.03113617   0.06103682
## 
## Tuning parameter 'model' was held constant at a value of tree
## Tuning parameter 'winnow' was held constant at a value of FALSE
## Kappa was used to select the optimal model using the one SE rule.
## The final values used for the model were trials = 5, model = tree
##  and winnow = FALSE.
The output is similar to what “caret” would produce on its own. The difference here is that the evaluation criteria were set by us rather than chosen automatically. A close look reveals that all of the models perform poorly, and that performance does not change after ten trials; the one-SE rule settles on the model with five trials.
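From here, the tuned object can be used like any other caret model. Below is a short sketch; in practice you would predict on a held-out test set, which we have not created in this post, so predicting on the training data here is purely for illustration.

# Sketch only: inspect the resampling results and generate predictions
# with the final (trials = 5) model. A separate test set would be used in practice.
customModel$results
preds <- predict(customModel, newdata = Wages1)
confusionMatrix(preds, Wages1$sex)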
This post provided a brief explanation of developing a customized way of assessing a model’s performance. To do this, you need to configure your resampling and selection options as well as set up your tuning grid before training the model. Understanding how to customize the evaluation of machine learning models is one of the best ways to develop accurate models that still generalize well.