Category Archives: R programming

Polynomial Spline Regression in R

Normally, when least squares regression is used, you fit one line to the data. However, sometimes you may want enough flexibility that you fit different lines over different regions of your independent variable. This process of fitting different lines over different regions of X is known as regression splines.

How this works is that there are different coefficient values based on the regions of X. As the researcher, you can set the cutoff points for each region. The cutoff point is called a “knot.” The more knots you use, the more flexible the model becomes, because there are fewer data points within each region, allowing for more variability.

We will now go through an example of polynomial regression splines. Remember that polynomial means we will have a curved line, as we are using higher-order polynomials. Our goal will be to predict total sales based on the amount of innovation a store employs. We will use the “Ecdat” package and the “Clothing” dataset. In addition, we will need the “splines” package. The code is as follows.

library(splines);library(Ecdat)
data(Clothing)

We will now fit our model. We must indicate the number and placement of the knots. This is commonly done at the 25th, 50th, and 75th percentiles. Below is the code.

fit<-lm(tsales~bs(inv2,knots = c(12000,60000,150000)),data = Clothing)

In the code above we used the traditional “lm” function to fit the model. However, we also used the “bs” function, which allows us to create our spline regression model. The argument “knots” was set to three different values. Lastly, the dataset was indicated.

Remember that the default spline in R is a cubic (third-degree) polynomial. This degree is used because it is hard for the eye to detect the changes in the fit at the knots.
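
If you would rather not pick the knot locations by hand, here is a hedged alternative (a sketch of my own, not part of the original example): supplying the “df” argument lets “bs” place the knots at quantiles of “inv2” automatically. With a cubic spline, df = 6 corresponds to three interior knots.

fit.df<-lm(tsales~bs(inv2,df=6),data = Clothing) # knots chosen automatically at quantiles
attr(bs(Clothing$inv2,df=6),"knots") # inspect which values bs() picked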

We now need X values that we can use for prediction purposes. In the code below we first find the range of the “inv2” variable. We then create a grid that includes all the possible values of “inv2” in increments of 1. Lastly, we use the “predict” function to develop the predictions. We set the “se” argument to TRUE as we will need this information. The code is below.

inv2lims<-range(Clothing$inv2)
inv2.grid<-seq(from=inv2lims[1],to=inv2lims[2])
pred<-predict(fit,newdata=list(inv2=inv2.grid),se=T)

We are now ready to plot our model. The code below graphs the model and includes the regression line (red), confidence interval (green), as well as the location of each knot (blue)

plot(Clothing$inv2,Clothing$tsales,main="Regression Spline Plot")
lines(inv2.grid,pred$fit,col='red',lwd=3)
lines(inv2.grid,pred$fit+2*pred$se.fit,lty="dashed",lwd=2,col='green')
lines(inv2.grid,pred$fit-2*pred$se.fit,lty="dashed",lwd=2,col='green')
segments(12000,0,x1=12000,y1=5000000,col='blue' )
segments(60000,0,x1=60000,y1=5000000,col='blue' )
segments(150000,0,x1=150000,y1=5000000,col='blue' )


When this model was created, it was essentially three models connected. Model one goes from the first blue line to the second, model two goes from the second blue line to the third, and model three runs from the third blue line to the end. This kind of flexibility is valuable in understanding nonlinear relationships.

Logistic Polynomial Regression in R

Polynomial regression is used when you want to develop a regression model that is not linear. It is common to use this method when performing traditional least squares regression. However, it is also possible to use polynomial regression when the dependent variable is categorical. As such, in this post, we will go through an example of logistic polynomial regression.

Specifically, we will use the “Clothing” dataset from the “Ecdat” package. We will divide the “tsales” dependent variable into two categories to run the analysis. Below is the code to get started.

library(Ecdat)
data(Clothing)

There is little preparation for this example. Below is the code for the model

fitglm<-glm(I(tsales>900000)~poly(inv2,4),data=Clothing,family = binomial)

Here is what we did

1. We created an object called “fitglm” to save our results
2. We used the “glm” function to process the model
3. We used the “I” function. This told R to process the information inside the parentheses as is. As such, we did not have to make a new variable in which we split the “tsales” variable. Simply, if sales were greater than 900000 it was coded 1, and 0 otherwise (a quick check of this split appears after this list).
4. Next, we set the information for the independent variable. We used the “poly” function. Inside this function, we placed the “inv2” variable and the highest order polynomial we want to explore.
5. We set the data to “Clothing”
6. Lastly, we set the “family” argument to “binomial” which is needed for logistic regression
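
As a quick sanity check (my addition, not in the original write-up), you can tabulate how many stores fall on each side of the 900000 cutoff before fitting the model.

table(Clothing$tsales>900000) # counts of stores below and above the cutoff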

Below are the results.

summary(fitglm)
## 
## Call:
## glm(formula = I(tsales > 9e+05) ~ poly(inv2, 4), family = binomial, 
##     data = Clothing)
## 
## Deviance Residuals: 
##     Min       1Q   Median       3Q      Max  
## -1.5025  -0.8778  -0.8458   1.4534   1.5681  
## 
## Coefficients:
##                Estimate Std. Error z value Pr(>|z|)  
## (Intercept)       3.074      2.685   1.145   0.2523  
## poly(inv2, 4)1  641.710    459.327   1.397   0.1624  
## poly(inv2, 4)2  585.975    421.723   1.389   0.1647  
## poly(inv2, 4)3  259.700    178.081   1.458   0.1448  
## poly(inv2, 4)4   73.425     44.206   1.661   0.0967 .
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## (Dispersion parameter for binomial family taken to be 1)
## 
##     Null deviance: 521.57  on 399  degrees of freedom
## Residual deviance: 493.51  on 395  degrees of freedom
## AIC: 503.51
## 
## Number of Fisher Scoring iterations: 13

It appears that only the 4th-degree polynomial approaches significance, and only marginally at that (p < .10). We will now find the range of our independent variable “inv2” and make a grid from this information. Doing this will allow us to run our model using the full range of possible values for our independent variable.

inv2lims<-range(Clothing$inv2)
inv2.grid<-seq(from=inv2lims[1],to=inv2lims[2])

The “inv2lims” object has two values: the lowest value in “inv2” and the highest value. These values serve as the lowest and highest values in our “inv2.grid” object. This means that we have values starting at 350 and going to 400000 in increments of 1 in a grid to be used as values for “inv2” in our prediction model. Below is our prediction model.

predsglm<-predict(fitglm,newdata=list(inv2=inv2.grid),se=T)

Next, we need to convert these predictions, which are on the log-odds (logit) scale, into the probability that a given value of “inv2” corresponds to a store with “tsales” greater than 900000. The equation is as follows.

pfit<-exp(predsglm$fit)/(1+exp(predsglm$fit))

Graphing this leads to interesting insights. Below is the code

plot(pfit)


You can see the curves in the line from the polynomial expression. As “inv2” increases, the probability increases until the values fall between 125000 and 200000, where it dips. This is interesting, to say the least.

We now need to plot the actual model. First, we need to calculate the confidence intervals. This is done with the code below.

se.bandsglm.logit<-cbind(predsglm$fit+2*predsglm$se.fit,predsglm$fit-2*predsglm$se.fit)
se.bandsglm<-exp(se.bandsglm.logit)/(1+exp(se.bandsglm.logit))

The “se.bandsglm.logit” object contains the log odds of each example and the “se.bandsglm” object has the probabilities. Now we plot the results.

plot(Clothing$inv2,I(Clothing$tsales>900000),xlim=inv2lims,type='n')
points(jitter(Clothing$inv2),I((Clothing$tsales>900000)),cex=2,pch='|',col='darkgrey')
lines(inv2.grid,pfit,lwd=4)
matlines(inv2.grid,se.bandsglm,col="green",lty=6,lwd=6)

In the code above we did the following.
1. We plotted our dependent and independent variables. However, we set the argument “type” to ‘n’, which plots nothing. This was done so we can add the information step-by-step.
2. We added the points. This was done using the “points” function. The “jitter” function just helps to spread the information out. The other arguments (cex, pch, col) are for aesthetics and are optional.
3. We add our logistic polynomial line based on our independent variable grid and the “pfit” object which has all of the predicted probabilities.
4. Last, we add the confidence intervals using the “matlines” function, which takes the grid object as well as the “se.bandsglm” information.

You can see that these results are similar to when we only graphed the “pfit” information. However, we also added the confidence intervals. You can see the same dip around 125000-200000, where there is also a larger confidence interval. If you look at the plot, you can see that there are fewer data points in this range, which may be what is making the intervals wider.

Conclusion

Logistic polynomial regression allows the regression line to have more curves to it if it is necessary. This is useful for fitting data that is non-linear in nature.

Polynomial Regression in R

Polynomial regression is one of the easiest ways to fit a non-linear line to a data set. This is done by applying higher-order polynomial terms, such as quadratic or cubic terms, to one or more predictor variables in a model.

Generally, polynomial regression is used for one predictor and one outcome variable. When there are several predictor variables it is more common to use generalized additive modeling. In this post, we will use the “Clothing” dataset from the “Ecdat” package to predict total sales with the use of polynomial regression. Below is some initial code.

library(Ecdat)
data(Clothing)
str(Clothing)
## 'data.frame':    400 obs. of  13 variables:
##  $ tsales : int  750000 1926395 1250000 694227 750000 400000 1300000 495340 1200000 495340 ...
##  $ sales  : num  4412 4281 4167 2670 15000 ...
##  $ margin : num  41 39 40 40 44 41 39 28 41 37 ...
##  $ nown   : num  1 2 1 1 2 ...
##  $ nfull  : num  1 2 2 1 1.96 ...
##  $ npart  : num  1 3 2.22 1.28 1.28 ...
##  $ naux   : num  1.54 1.54 1.41 1.37 1.37 ...
##  $ hoursw : int  76 192 114 100 104 72 161 80 158 87 ...
##  $ hourspw: num  16.8 22.5 17.2 21.5 15.7 ...
##  $ inv1   : num  17167 17167 292857 22207 22207 ...
##  $ inv2   : num  27177 27177 71571 15000 10000 ...
##  $ ssize  : int  170 450 300 260 50 90 400 100 450 75 ...
##  $ start  : num  41 39 40 40 44 41 39 28 41 37 ...

We are going to use the “inv2” variable as our predictor. This variable measures the investment in automation by a particular store. We will now run our polynomial regression model.

fit<-lm(tsales~poly(inv2,5),data = Clothing)
summary(fit)
## 
## Call:
## lm(formula = tsales ~ poly(inv2, 5), data = Clothing)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -946668 -336447  -96763  184927 3599267 
## 
## Coefficients:
##                Estimate Std. Error t value Pr(>|t|)    
## (Intercept)      833584      28489  29.259  < 2e-16 ***
## poly(inv2, 5)1  2391309     569789   4.197 3.35e-05 ***
## poly(inv2, 5)2  -665063     569789  -1.167   0.2438    
## poly(inv2, 5)3    49793     569789   0.087   0.9304    
## poly(inv2, 5)4  1279190     569789   2.245   0.0253 *  
## poly(inv2, 5)5  -341189     569789  -0.599   0.5497    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 569800 on 394 degrees of freedom
## Multiple R-squared:  0.05828,    Adjusted R-squared:  0.04633 
## F-statistic: 4.876 on 5 and 394 DF,  p-value: 0.0002428

The code above should be mostly familiar. We use the “lm” function as normal for regression. However, we then used the “poly” function on the “inv2” variable. What this does is create orthogonal polynomial terms for “inv2” up to the degree given (5 here), so the model contains first- through fifth-degree terms. The results indicate that the first- and fourth-degree polynomial terms are significant.
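
To see what “poly” actually produces, the short sketch below (my addition) prints the first few rows of the five orthogonal polynomial columns and their correlations, which are essentially zero off the diagonal. This orthogonality is what allows the contribution of each degree to be assessed separately in the table above.

head(poly(Clothing$inv2,5)) # five orthogonal polynomial columns
round(cor(poly(Clothing$inv2,5)),3) # off-diagonal correlations are essentially 0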

We will now prepare a visual of the results, but first there are several things we need to do. First, we want to find the range of our predictor variable “inv2” and save this information in an object. The code is below.

inv2lims<-range(Clothing$inv2)

Second, we need to create a grid that has all the possible values of “inv2” from the lowest to the highest broken up in intervals of one. Below is the code.

inv2.grid<-seq(from=inv2lims[1],to=inv2lims[2])

We now have almost 400000 data points in the “inv2.grid” object through this approach. We will now use these values to predict “tsales.” We also want the standard errors, so we set “se” to TRUE.

preds<-predict(fit,newdata=list(inv2=inv2.grid),se=TRUE)

We now need to find the confidence interval for our regression line. This is done by binding together the predicted fit plus and minus two times the standard error, as shown below.

se.bands<-cbind(preds$fit+2*preds$se.fit,preds$fit-2*preds$se.fit)

With these steps completed, we are ready to create our visualization.

To make our visual, we use the “plot” function on the predictor and outcome. Doing this gives us a plot without a regression line. We then use the “lines” function to add the polynomial regression line; however, this line is based on the “inv2.grid” object (roughly 400,000 values) and our predictions. Lastly, we use the “matlines” function to add the confidence intervals we found and stored in the “se.bands” object.

plot(Clothing$inv2,Clothing$tsales)
lines(inv2.grid,preds$fit,lwd=4,col='blue')
matlines(inv2.grid,se.bands,lwd = 4,col = "yellow",lty=4)


Conclusion

You can clearly see the curvature of the line, which helped to improve model fit. However, you can also tell that we are fitting this line mostly to outliers. This is one reason the standard error gets wider and wider: there are fewer and fewer observations on which to base it. Still, for demonstration purposes, this is a clear example of the power of polynomial regression.

Partial Least Squares Regression in R

Partial least squares regression is a form of regression that involves the development of components of the original variables in a supervised way. What this means is that the dependent variable is used to help create the new components from the original variables. As a result, when PLS is used, the linear combinations of the new features help to explain both the independent and dependent variables in the model.

In this post, we will predict “income” in the “Mroz” dataset using PLS. Below is some initial code.

library(pls);library(Ecdat)
data("Mroz")
str(Mroz)
## 'data.frame':    753 obs. of  18 variables:
##  $ work      : Factor w/ 2 levels "yes","no": 2 2 2 2 2 2 2 2 2 2 ...
##  $ hoursw    : int  1610 1656 1980 456 1568 2032 1440 1020 1458 1600 ...
##  $ child6    : int  1 0 1 0 1 0 0 0 0 0 ...
##  $ child618  : int  0 2 3 3 2 0 2 0 2 2 ...
##  $ agew      : int  32 30 35 34 31 54 37 54 48 39 ...
##  $ educw     : int  12 12 12 12 14 12 16 12 12 12 ...
##  $ hearnw    : num  3.35 1.39 4.55 1.1 4.59 ...
##  $ wagew     : num  2.65 2.65 4.04 3.25 3.6 4.7 5.95 9.98 0 4.15 ...
##  $ hoursh    : int  2708 2310 3072 1920 2000 1040 2670 4120 1995 2100 ...
##  $ ageh      : int  34 30 40 53 32 57 37 53 52 43 ...
##  $ educh     : int  12 9 12 10 12 11 12 8 4 12 ...
##  $ wageh     : num  4.03 8.44 3.58 3.54 10 ...
##  $ income    : int  16310 21800 21040 7300 27300 19495 21152 18900 20405 20425 ...
##  $ educwm    : int  12 7 12 7 12 14 14 3 7 7 ...
##  $ educwf    : int  7 7 7 7 14 7 7 3 7 7 ...
##  $ unemprate : num  5 11 5 5 9.5 7.5 5 5 3 5 ...
##  $ city      : Factor w/ 2 levels "no","yes": 1 2 1 1 2 2 1 1 1 1 ...
##  $ experience: int  14 5 15 6 7 33 11 35 24 21 ...

First, we must prepare our data by dividing it into a training and test set. We will do this by doing a 50/50 split of the data.

set.seed(777)
train<-sample(c(T,F),nrow(Mroz),rep=T) #50/50 train/test split
test<-(!train)

In the code above we used the “set.seed” function in order to ensure replication. Then we created the “train” object and used the “sample” function to make a vector of TRUE and FALSE values based on the number of rows in “Mroz”. Lastly, we created the “test” object based on everything that is not in the “train” object, which is what the exclamation point is for.
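
A quick check (not in the original post) confirms that the split is roughly 50/50.

table(train) # counts of FALSE (test) and TRUE (train) rows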

Now we create our model using the “plsr” function from the “pls” package and examine the results using the “summary” function. We will also scale the data, since the scale of the variables affects the development of the components, and we will use cross-validation. Below is the code.

set.seed(777)
pls.fit<-plsr(income~.,data=Mroz,subset=train,scale=T,validation="CV")
summary(pls.fit)
## Data:    X dimension: 392 17 
##  Y dimension: 392 1
## Fit method: kernelpls
## Number of components considered: 17
## 
## VALIDATION: RMSEP
## Cross-validated using 10 random segments.
##        (Intercept)  1 comps  2 comps  3 comps  4 comps  5 comps  6 comps
## CV           11218     8121     6701     6127     5952     5886     5857
## adjCV        11218     8114     6683     6108     5941     5872     5842
##        7 comps  8 comps  9 comps  10 comps  11 comps  12 comps  13 comps
## CV        5853     5849     5854      5853      5853      5852      5852
## adjCV     5837     5833     5837      5836      5836      5835      5835
##        14 comps  15 comps  16 comps  17 comps
## CV         5852      5852      5852      5852
## adjCV      5835      5835      5835      5835
## 
## TRAINING: % variance explained
##         1 comps  2 comps  3 comps  4 comps  5 comps  6 comps  7 comps
## X         17.04    26.64    37.18    49.16    59.63    64.63    69.13
## income    49.26    66.63    72.75    74.16    74.87    75.25    75.44
##         8 comps  9 comps  10 comps  11 comps  12 comps  13 comps  14 comps
## X         72.82    76.06     78.59     81.79     85.52     89.55     92.14
## income    75.49    75.51     75.51     75.52     75.52     75.52     75.52
##         15 comps  16 comps  17 comps
## X          94.88     97.62    100.00
## income     75.52     75.52     75.52

The printout includes the root mean squared error for each number of components in the VALIDATION section as well as the variance explained in the TRAINING section. There are 17 components because there are 17 independent variables. You can see that after component 3 or 4 there is little improvement in the variance explained in the dependent variable. Below is the code for the plot of these results; it requires the use of the “validationplot” function with the “val.type” argument set to “MSEP”.

validationplot(pls.fit,val.type = "MSEP")


We will now do the predictions with our model. We use the “predict” function with our “Mroz” dataset, but only those rows indexed by the “test” vector, and we set the number of components to three based on our previous plot. Below is the code.

set.seed(777)
pls.pred<-predict(pls.fit,Mroz[test,],ncomp=3)

After this, we will calculate the mean squared error. This is done by taking the difference between our predictions and the dependent variable in the test set, squaring it, and calculating the mean. Below is the code.

mean((pls.pred-Mroz$income[test])^2)
## [1] 63386682

As you know, this information is only useful when compared to something else. Therefore, we will run the data with a traditional least squares regression model and compare the results.

set.seed(777)
lm.fit<-lm(income~.,data=Mroz,subset=train)
lm.pred<-predict(lm.fit,Mroz[test,])
mean((lm.pred-Mroz$income[test])^2)
## [1] 59432814

The least squares model is slightly better than our partial least squares model, but if we look at the model we see several variables that are not significant. We will remove these and see what the results are.

summary(lm.fit)
## 
## Call:
## lm(formula = income ~ ., data = Mroz, subset = train)
## 
## Residuals:
##    Min     1Q Median     3Q    Max 
## -20131  -2923  -1065   1670  36246 
## 
## Coefficients:
##               Estimate Std. Error t value Pr(>|t|)    
## (Intercept) -1.946e+04  3.224e+03  -6.036 3.81e-09 ***
## workno      -4.823e+03  1.037e+03  -4.651 4.59e-06 ***
## hoursw       4.255e+00  5.517e-01   7.712 1.14e-13 ***
## child6      -6.313e+02  6.694e+02  -0.943 0.346258    
## child618     4.847e+02  2.362e+02   2.052 0.040841 *  
## agew         2.782e+02  8.124e+01   3.424 0.000686 ***
## educw        1.268e+02  1.889e+02   0.671 0.502513    
## hearnw       6.401e+02  1.420e+02   4.507 8.79e-06 ***
## wagew        1.945e+02  1.818e+02   1.070 0.285187    
## hoursh       6.030e+00  5.342e-01  11.288  < 2e-16 ***
## ageh        -9.433e+01  7.720e+01  -1.222 0.222488    
## educh        1.784e+02  1.369e+02   1.303 0.193437    
## wageh        2.202e+03  8.714e+01  25.264  < 2e-16 ***
## educwm      -4.394e+01  1.128e+02  -0.390 0.697024    
## educwf       1.392e+02  1.053e+02   1.322 0.186873    
## unemprate   -1.657e+02  9.780e+01  -1.694 0.091055 .  
## cityyes     -3.475e+02  6.686e+02  -0.520 0.603496    
## experience  -1.229e+02  4.490e+01  -2.737 0.006488 ** 
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 5668 on 374 degrees of freedom
## Multiple R-squared:  0.7552, Adjusted R-squared:  0.744 
## F-statistic: 67.85 on 17 and 374 DF,  p-value: < 2.2e-16
set.seed(777)
lm.fit<-lm(income~work+hoursw+child618+agew+hearnw+hoursh+wageh+experience,data=Mroz,subset=train)
lm.pred<-predict(lm.fit,Mroz[test,])
mean((lm.pred-Mroz$income[test])^2)
## [1] 57839715

As you can see, the error decreased even more, which indicates that the least squares regression model is superior to the partial least squares model. In addition, the partial least squares model is much more difficult to explain because of the use of components. As such, the least squares model is the favored one.

Principal Component Regression in R

This post will explain and provide an example of principal component regression (PCR). Principal component regression involves having the model construct components that are linear combinations of the independent variables. This is done with principal component analysis, so the components are designed to capture as much of the variance in the independent variables as possible rather than to explain the dependent variable directly. Doing this often allows you to use fewer variables in your model and usually improves the fit of your model as well.

Since PCR is based on principal component analysis it is an unsupervised method, which means the dependent variable has no influence on the development of the components. As such, there are times when the components that are developed may not be beneficial for explaining the dependent variable.

Our example will use the “Mroz” dataset from the “Ecdat” package. Our goal will be to predict “income” based on the variables in the dataset. Below is the initial code

library(pls);library(Ecdat)
data(Mroz)
str(Mroz)
## 'data.frame':    753 obs. of  18 variables:
##  $ work      : Factor w/ 2 levels "yes","no": 2 2 2 2 2 2 2 2 2 2 ...
##  $ hoursw    : int  1610 1656 1980 456 1568 2032 1440 1020 1458 1600 ...
##  $ child6    : int  1 0 1 0 1 0 0 0 0 0 ...
##  $ child618  : int  0 2 3 3 2 0 2 0 2 2 ...
##  $ agew      : int  32 30 35 34 31 54 37 54 48 39 ...
##  $ educw     : int  12 12 12 12 14 12 16 12 12 12 ...
##  $ hearnw    : num  3.35 1.39 4.55 1.1 4.59 ...
##  $ wagew     : num  2.65 2.65 4.04 3.25 3.6 4.7 5.95 9.98 0 4.15 ...
##  $ hoursh    : int  2708 2310 3072 1920 2000 1040 2670 4120 1995 2100 ...
##  $ ageh      : int  34 30 40 53 32 57 37 53 52 43 ...
##  $ educh     : int  12 9 12 10 12 11 12 8 4 12 ...
##  $ wageh     : num  4.03 8.44 3.58 3.54 10 ...
##  $ income    : int  16310 21800 21040 7300 27300 19495 21152 18900 20405 20425 ...
##  $ educwm    : int  12 7 12 7 12 14 14 3 7 7 ...
##  $ educwf    : int  7 7 7 7 14 7 7 3 7 7 ...
##  $ unemprate : num  5 11 5 5 9.5 7.5 5 5 3 5 ...
##  $ city      : Factor w/ 2 levels "no","yes": 1 2 1 1 2 2 1 1 1 1 ...
##  $ experience: int  14 5 15 6 7 33 11 35 24 21 ...

Our first step is to divide our dataset into a train and test set. We will do a simple 50/50 split for this demonstration.

train<-sample(c(T,F),nrow(Mroz),rep=T) #50/50 train/test split
test<-(!train)

In the code above we use the “sample” function to create a “train” index based on the number of rows in the “Mroz” dataset. Basically, R is making a vector that randomly assigns different rows in the “Mroz” dataset to be marked as TRUE or FALSE. Next, we assign everything that is not in the “train” vector to the “test” vector by using the exclamation mark.

We are now ready to develop our model. Below is the code

set.seed(777)
pcr.fit<-pcr(income~.,data=Mroz,subset=train,scale=T,validation="CV")

To make our model we use the “pcr” function from the “pls” package. The “subset” argument tells R to use the “train” vector to select examples from the “Mroz” dataset. The “scale” argument makes sure everything is measured the same way. This is important when using a component analysis tool, as variables with different scales have different influence on the components. Lastly, the “validation” argument enables cross-validation. This will help us to determine the number of components to use for prediction. Below are the results of the model using the “summary” function.

summary(pcr.fit)
## Data:    X dimension: 381 17 
##  Y dimension: 381 1
## Fit method: svdpc
## Number of components considered: 17
## 
## VALIDATION: RMSEP
## Cross-validated using 10 random segments.
##        (Intercept)  1 comps  2 comps  3 comps  4 comps  5 comps  6 comps
## CV           12102    11533    11017     9863     9884     9524     9563
## adjCV        12102    11534    11011     9855     9878     9502     9596
##        7 comps  8 comps  9 comps  10 comps  11 comps  12 comps  13 comps
## CV        9149     9133     8811      8527      7265      7234      7120
## adjCV     9126     9123     8798      8877      7199      7172      7100
##        14 comps  15 comps  16 comps  17 comps
## CV         7118      7141      6972      6992
## adjCV      7100      7123      6951      6969
## 
## TRAINING: % variance explained
##         1 comps  2 comps  3 comps  4 comps  5 comps  6 comps  7 comps
## X        21.359    38.71    51.99    59.67    65.66    71.20    76.28
## income    9.927    19.50    35.41    35.63    41.28    41.28    46.75
##         8 comps  9 comps  10 comps  11 comps  12 comps  13 comps  14 comps
## X         80.70    84.39     87.32     90.15     92.65     95.02     96.95
## income    47.08    50.98     51.73     68.17     68.29     68.31     68.34
##         15 comps  16 comps  17 comps
## X          98.47     99.38    100.00
## income     68.48     70.29     70.39

There is a lot of information here. The VALIDATION: RMSEP section gives you the root mean squared error of the model broken down by number of components. The TRAINING section is similar to the printout of any PCA, but it shows the cumulative variance explained by the components as well as the variance explained in the dependent variable “income.” In this model, we are able to explain up to 70% of the variance in “income” if we use all 17 components.
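
If you prefer to pull these numbers out programmatically rather than reading them off the printout, the “pls” package provides the “explvar” function. Below is a small sketch (my addition) that returns the cumulative percentage of predictor variance captured by the components.

cumsum(explvar(pcr.fit)) # cumulative % of X variance, matching the TRAINING section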

We can graph the MSE using the “validationplot” function with the argument “val.type” set to “MSEP”. The code is below.

validationplot(pcr.fit,val.type = "MSEP")


How many components to pick is subjective; however, there is almost no improvement beyond 13, so we will use 13 components in our prediction model and calculate the mean squared error.

set.seed(777)
pcr.pred<-predict(pcr.fit,Mroz[test,],ncomp=13)
mean((pcr.pred-Mroz$income[test])^2)
## [1] 48958982

MSE is what you would use to compare this model to other models that you developed. Below is the performance of a least squares regression model

set.seed(777)
lm.fit<-lm(income~.,data=Mroz,subset=train)
lm.pred<-predict(lm.fit,Mroz[test,])
mean((lm.pred-Mroz$income[test])^2)
## [1] 47794472

If you compare the MSEs, the least squares model performs slightly better than the PCR one. However, there are a lot of non-significant features in the model, as shown below.

summary(lm.fit)
## 
## Call:
## lm(formula = income ~ ., data = Mroz, subset = train)
## 
## Residuals:
##    Min     1Q Median     3Q    Max 
## -27646  -3337  -1387   1860  48371 
## 
## Coefficients:
##               Estimate Std. Error t value Pr(>|t|)    
## (Intercept) -2.215e+04  3.987e+03  -5.556 5.35e-08 ***
## workno      -3.828e+03  1.316e+03  -2.909  0.00385 ** 
## hoursw       3.955e+00  7.085e-01   5.582 4.65e-08 ***
## child6       5.370e+02  8.241e+02   0.652  0.51512    
## child618     4.250e+02  2.850e+02   1.491  0.13673    
## agew         1.962e+02  9.849e+01   1.992  0.04709 *  
## educw        1.097e+02  2.276e+02   0.482  0.63013    
## hearnw       9.835e+02  2.303e+02   4.270 2.50e-05 ***
## wagew        2.292e+02  2.423e+02   0.946  0.34484    
## hoursh       6.386e+00  6.144e-01  10.394  < 2e-16 ***
## ageh        -1.284e+01  9.762e+01  -0.132  0.89542    
## educh        1.460e+02  1.592e+02   0.917  0.35982    
## wageh        2.083e+03  9.930e+01  20.978  < 2e-16 ***
## educwm       1.354e+02  1.335e+02   1.014  0.31115    
## educwf       1.653e+02  1.257e+02   1.315  0.18920    
## unemprate   -1.213e+02  1.148e+02  -1.057  0.29140    
## cityyes     -2.064e+02  7.905e+02  -0.261  0.79421    
## experience  -1.165e+02  5.393e+01  -2.159  0.03147 *  
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 6729 on 363 degrees of freedom
## Multiple R-squared:  0.7039, Adjusted R-squared:   0.69 
## F-statistic: 50.76 on 17 and 363 DF,  p-value: < 2.2e-16

After removing these, the MSE is almost the same for the PCR and least squares models.

set.seed(777)
lm.fit2<-lm(income~work+hoursw+hearnw+hoursh+wageh,data=Mroz,subset=train)
lm.pred2<-predict(lm.fit2,Mroz[test,])
mean((lm.pred2-Mroz$income[test])^2)
## [1] 47968996

Conclusion

Since the least squares model is simpler, it is probably the superior model here. PCR is strongest when there are a lot of variables involved and when there are issues with multicollinearity.

Example of Best Subset Regression in R

This post will provide an example of best subset regression. This is a topic that has been covered before in this blog. However, in the current post, we will approach this using slightly different code and a different dataset. We will be using the “HI” dataset from the “Ecdat” package. Our goal will be to predict the number of hours a woman works based on the other variables in the dataset. Below is some initial code.

library(leaps);library(Ecdat)
data(HI)
str(HI)
## 'data.frame':    22272 obs. of  13 variables:
##  $ whrswk    : int  0 50 40 40 0 40 40 25 45 30 ...
##  $ hhi       : Factor w/ 2 levels "no","yes": 1 1 2 1 2 2 2 1 1 1 ...
##  $ whi       : Factor w/ 2 levels "no","yes": 1 2 1 2 1 2 1 1 2 1 ...
##  $ hhi2      : Factor w/ 2 levels "no","yes": 1 1 2 2 2 2 2 1 1 2 ...
##  $ education : Ord.factor w/ 6 levels "<9years"<"9-11years"<..: 4 4 3 4 2 3 5 3 5 4 ...
##  $ race      : Factor w/ 3 levels "white","black",..: 1 1 1 1 1 1 1 1 1 1 ...
##  $ hispanic  : Factor w/ 2 levels "no","yes": 1 1 1 1 1 1 1 1 1 1 ...
##  $ experience: num  13 24 43 17 44.5 32 14 1 4 7 ...
##  $ kidslt6   : int  2 0 0 0 0 0 0 1 0 1 ...
##  $ kids618   : int  1 1 0 1 0 0 0 0 0 0 ...
##  $ husby     : num  12 1.2 31.3 9 0 ...
##  $ region    : Factor w/ 4 levels "other","northcentral",..: 2 2 2 2 2 2 2 2 2 2 ...
##  $ wght      : int  214986 210119 219955 210317 219955 208148 213615 181960 214874 214874 ...

To develop a model we use the “regsubsets” function from the “leaps” package. Most of the coding is the same as linear regression. The only difference is the “nvmax” argument, which is set to 13. The default setting for “nvmax” is 8. This is good if you only have 8 variables. However, the results from the “str” function indicate that we have 13 variables. Therefore, we need to set the “nvmax” argument to 13 instead of the default value of 8 in order to be sure to include all variables. Below is the code.

regfit.full<-regsubsets(whrswk~.,HI, nvmax = 13)

We can look at the results with the “summary” function. For space reasons, the code is shown but the results will not be shown here.

summary(regfit.full)

If you run the code above on your computer, you will see columns that are named after the variables in the model. A star in a column means that variable is included in the model. To the left are the numbers 1-13; one means one variable is in the model, two means two variables are in the model, and so on.
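
As an alternative to scanning the star table, the “leaps” package also provides a plot method for “regsubsets” objects. The sketch below (my addition) shades which variables are included in the best model at each BIC value.

plot(regfit.full,scale = "bic")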

Our next step is to determine which of these models is the best. First, we need to decide what our criteria for inclusion will be. Below is a list of available fit indices.

names(summary(regfit.full))
## [1] "which"  "rsq"    "rss"    "adjr2"  "cp"     "bic"    "outmat" "obj"

For our purposes, we will use “rsq” (r-square) and “bic” (Bayesian Information Criterion). In the code below we are going to save the values for these two fit indices in their own objects.

rsq<-summary(regfit.full)$rsq
bic<-summary(regfit.full)$bic

Now let’s plot them

plot(rsq,type='l',main="R-Square",xlab="Number of Variables")


plot(bic,type='l',main="BIC",xlab="Number of Variables")


You can see that for r-square the values increase and for BIC the values decrease. We will now make both of these plots again, but this time we will have R tell us the optimal number of variables for each fit index. To do this, we use the “which.max” and “which.min” functions to find the maximum r-square and the minimum BIC.

which.max(rsq)
## [1] 13
which.min(bic)
## [1] 12

The model with the best r-square is the one with 13 variables. This makes sense, as r-square always improves as you add variables. Since this is a demonstration, we will not correct for this. For BIC, the lowest value was for 12 variables. We will now plot this information and highlight the best model in each plot using the “points” function, which allows you to emphasize one point in a graph.

plot(rsq,type='l',main="R-Square with Best Model Highlighted",xlab="Number of Variables")
points(13,(rsq[13]),col="blue",cex=7,pch=20)


plot(bic,type='l',main="BIC with Best Model Highlighted",xlab="Number of Variables")
points(12,(bic[12]),col="blue",cex=7,pch=20)


Since BIC calls for only 12 variables it is simpler than the r-square recommendation of 13. Therefore, we will fit our final model using the BIC recommendation of 12. Below is the code.

coef(regfit.full,12)
##        (Intercept)             hhiyes             whiyes 
##        30.31321796         1.16940604        18.25380263 
##        education.L        education^4        education^5 
##         6.63847641         1.54324869        -0.77783663 
##          raceblack        hispanicyes         experience 
##         3.06580207        -1.33731802        -0.41883100 
##            kidslt6            kids618              husby 
##        -6.02251640        -0.82955827        -0.02129349 
## regionnorthcentral 
##         0.94042820

So here is our final model. This is what we would use for our test set.
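
One caveat: “regsubsets” does not come with a “predict” method, so if you want predictions from the chosen model you have to build them yourself from the coefficients. Below is a commonly used helper, written here as a hedged sketch (my addition); the function name and structure are my own.

predict.regsubsets<-function(object,newdata,id,...){
        form<-as.formula(object$call[[2]]) # the formula used to fit the model
        mat<-model.matrix(form,newdata) # design matrix for the new data
        coefi<-coef(object,id=id) # coefficients of the id-variable model
        mat[,names(coefi)]%*%coefi # predictions
}
head(predict.regsubsets(regfit.full,HI,id=12))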

Conclusion

Best subset regression provides the researcher with insights into every possible model as well as clues as to which model is, at least statistically, superior. This knowledge can be used for developing models for data science applications.

High Dimensionality Regression

There are times when least squares regression is not able to provide accurate predictions or explanations. One example is when least squares regression has to deal with a small sample size. By small, we mean when the total number of variables is greater than the sample size. Another term for this is high dimensionality, which means there are more variables than examples in the dataset.

This post will explain what happens when high dimensionality is a problem and also how to address it.

Inaccurate measurements

One problem with high dimensions in regression is that the results for the various metrics are overfitted to the data. Below is an example using the “attitude” dataset. There are 2 variables and 3 examples for developing a model. This is not strictly high dimensions but it is an example of a small sample size.

data("attitude")
reg1 <- lm(complaints[1:3]~rating[1:3],data=attitude[1:3]) 
summary(reg1)
## 
## Call:
## lm(formula = complaints[1:3] ~ rating[1:3], data = attitude[1:3])
## 
## Residuals:
##       1       2       3 
##  0.1026 -0.3590  0.2564 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)  
## (Intercept) 21.95513    1.33598   16.43   0.0387 *
## rating[1:3]  0.67308    0.02221   30.31   0.0210 *
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 0.4529 on 1 degrees of freedom
## Multiple R-squared:  0.9989, Adjusted R-squared:  0.9978 
## F-statistic: 918.7 on 1 and 1 DF,  p-value: 0.021

With only 3 data points the fit is nearly perfect. You can also examine the mean squared error of the model. Below is a function for this, followed by the results.

mse <- function(sm){ 
        mean(sm$residuals^2)}
mse(reg1)
## [1] 0.06837607

Almost no error. Lastly, let’s look at a visual of the model

with(attitude[1:3],plot(complaints[1:3]~ rating[1:3]))
title(main = "Sample Size 3")
abline(lm(complaints[1:3]~rating[1:3],data = attitude))


You can see that the regression line goes almost perfectly through each data point. If we tried to use this model on a test set in a real data science problem, the predictions would be highly inaccurate because the model is overfitted to these three points. Now we will rerun the analysis, this time with the full sample.

reg2<- lm(complaints~rating,data=attitude) 
summary(reg2)
## 
## Call:
## lm(formula = complaints ~ rating, data = attitude)
## 
## Residuals:
##      Min       1Q   Median       3Q      Max 
## -13.3880  -6.4553  -0.2997   6.1462  13.3603 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)    
## (Intercept)   8.2445     7.6706   1.075    0.292    
## rating        0.9029     0.1167   7.737 1.99e-08 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 7.65 on 28 degrees of freedom
## Multiple R-squared:  0.6813, Adjusted R-squared:  0.6699 
## F-statistic: 59.86 on 1 and 28 DF,  p-value: 1.988e-08

You can clearly see a huge reduction in the r-square from .99 to .68. Next is the mean squared error.

mse(reg2)
## [1] 54.61425

The error has increased a great deal. Lastly, we plot the data and the regression line.

with(attitude,plot(complaints~ rating))
title(main = "Full Sample Size")
abline(lm(complaints~rating,data = attitude))


Naturally, the second model is more likely to perform better with a test set. The problem is that least squares regression is too flexible when the number of features is greater than or equal to the number of examples in a dataset.

What to Do?

If least squares regression must be used, one solution to overcoming high dimensionality is to use some form of regularized regression such as ridge, lasso, or elastic net. Any of these regularization approaches will help to reduce the number of variables or dimensions in the final model through the use of shrinkage.
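
Below is a hedged sketch (my addition; the “glmnet” package is not used elsewhere in this post) of how lasso regularization shrinks coefficients, using the same “attitude” data from earlier. Treat it as an illustration of the idea rather than part of the original analysis.

library(glmnet)
x<-model.matrix(complaints~.,data = attitude)[,-1] # predictor matrix without the intercept
y<-attitude$complaints
cv.lasso<-cv.glmnet(x,y,alpha = 1) # alpha = 1 is the lasso
coef(cv.lasso,s = "lambda.min") # several coefficients are shrunk toward or to zero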

However, keep in mind that no matter what you do, as the number of dimensions increases so does the r-square, even if the added variables are useless. This is known as the curse of dimensionality. Again, regularization can help with this.

Remember that with a large number of dimensions there are normally several equally acceptable models. To determine which is most useful depends on understanding the problem and context of the study.

Conclusion

The ability to collect huge amounts of data has led to the growing problem of high dimensionality. Once there are more features than examples, statistical errors can follow. However, regularization is one tool for dealing with this problem.

Leave One Out Cross Validation in R

Leave-one-out cross-validation (LOOCV) is a variation of the validation approach: instead of splitting the dataset in half, LOOCV uses one example as the validation set and all the rest as the training set, repeating this for every example in the dataset. This helps to reduce bias and randomness in the results but, unfortunately, can increase variance. Remember that the goal is always to reduce the error rate, which is often calculated as the mean squared error.

In this post, we will use the “Hedonic” dataset from the “Ecdat” package to assess several different models that predict the taxes of homes. In order to do this, we will also need the “boot” package. Below is the code.

library(Ecdat);library(boot)
data(Hedonic)
str(Hedonic)
## 'data.frame':    506 obs. of  15 variables:
##  $ mv     : num  10.09 9.98 10.45 10.42 10.5 ...
##  $ crim   : num  0.00632 0.02731 0.0273 0.03237 0.06905 ...
##  $ zn     : num  18 0 0 0 0 0 12.5 12.5 12.5 12.5 ...
##  $ indus  : num  2.31 7.07 7.07 2.18 2.18 ...
##  $ chas   : Factor w/ 2 levels "no","yes": 1 1 1 1 1 1 1 1 1 1 ...
##  $ nox    : num  28.9 22 22 21 21 ...
##  $ rm     : num  43.2 41.2 51.6 49 51.1 ...
##  $ age    : num  65.2 78.9 61.1 45.8 54.2 ...
##  $ dis    : num  1.41 1.6 1.6 1.8 1.8 ...
##  $ rad    : num  0 0.693 0.693 1.099 1.099 ...
##  $ tax    : int  296 242 242 222 222 222 311 311 311 311 ...
##  $ ptratio: num  15.3 17.8 17.8 18.7 18.7 ...
##  $ blacks : num  0.397 0.397 0.393 0.395 0.397 ...
##  $ lstat  : num  -3 -2.39 -3.21 -3.53 -2.93 ...
##  $ townid : int  1 2 2 3 3 3 4 4 4 4 ...

First, we need to develop our basic least squares regression model. We will do this with the “glm” function. This is because the “cv.glm” function (more on this later) only works when models are developed with the “glm” function. Below is the code.

tax.glm<-glm(tax ~ mv+crim+zn+indus+chas+nox+rm+age+dis+rad+ptratio+blacks+lstat, data = Hedonic)

We now need to calculate the MSE. To do this we will use the “cv.glm” function. Below is the code.

cv.error<-cv.glm(Hedonic,tax.glm)
cv.error$delta
## [1] 4536.345 4536.075

cv.error$delta contains two numbers. The first is the raw leave-one-out cross-validation estimate of the prediction error, and the second is a bias-adjusted version of it. As you can see, the numbers are almost identical.
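
To make concrete what “cv.glm” is doing, below is a manual LOOCV sketch (my addition; it is much slower but should essentially reproduce the first delta value above). Each home is left out once, the model is refit, and the squared prediction error for the left-out home is recorded.

n<-nrow(Hedonic)
loocv.err<-numeric(n)
for (i in 1:n){
        fit.i<-glm(tax ~ mv+crim+zn+indus+chas+nox+rm+age+dis+rad+ptratio+blacks+lstat, data = Hedonic[-i,])
        loocv.err[i]<-(Hedonic$tax[i]-predict(fit.i,Hedonic[i,]))^2
}
mean(loocv.err) # average of the leave-one-out squared errors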

We will now repeat this process but with the inclusion of different polynomial models. The code for this is a little more complicated and is below.

cv.error=rep(0,5)
for (i in 1:5){
        tax.loocv<-glm(tax ~ mv+poly(crim,i)+zn+indus+chas+nox+rm+poly(age,i)+dis+rad+ptratio+blacks+lstat, data = Hedonic)
        cv.error[i]=cv.glm(Hedonic,tax.loocv)$delta[1]
}
cv.error
## [1] 4536.345 4515.464 4710.878 7047.097 9814.748

Here is what happened:

  1. First, we created an empty object called “cv.error” with five empty spots, which we will use to store information later.
  2. Next, we created a for loop that repeats 5 times
  3. Inside the for loop, we create the same regression model, except we added the “poly” function to the “age” and “crim” variables. These are the variables for which we want to try polynomials of degree 1 through 5 to see if doing so reduces the error.
  4. The results of the polynomial models are stored in the “cv.error” object, and we specifically request the first element of “delta.” Finally, we printed “cv.error” to the console.

From the results, you can see that the error decreases with a second-order polynomial but then increases after that. This means that higher-order polynomials are generally not beneficial here.
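
A quick plot of these errors (my addition) makes the pattern easier to see.

plot(1:5,cv.error,type = "b",xlab = "Polynomial degree",ylab = "LOOCV error")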

Conclusion

LOOCV is another option in assessing different models and determining which is most appropriate. As such, this is a tool that is used by many data scientists.

Validation Set for Regression in R

Estimating error and looking for ways to reduce it is a key component of machine learning. In this post, we will look at a simple way of addressing this problem through the use of the validation set method.

The validation set method is a standard approach in model development. To put it simply, you divide your dataset into a training and a hold-out set. The model is developed on the training set and then the hold-out set is used for prediction purposes. The error rate of the hold-out set is assumed to be reflective of the test error rate.

In the example below, we will use the “Carseats” dataset from the “ISLR” package. Our goal is to predict the competitors’ price for a carseat based on the other available variables. Below is some initial code

library(ISLR)
data("Carseats")
str(Carseats)
## 'data.frame':    400 obs. of  11 variables:
##  $ Sales      : num  9.5 11.22 10.06 7.4 4.15 ...
##  $ CompPrice  : num  138 111 113 117 141 124 115 136 132 132 ...
##  $ Income     : num  73 48 35 100 64 113 105 81 110 113 ...
##  $ Advertising: num  11 16 10 4 3 13 0 15 0 0 ...
##  $ Population : num  276 260 269 466 340 501 45 425 108 131 ...
##  $ Price      : num  120 83 80 97 128 72 108 120 124 124 ...
##  $ ShelveLoc  : Factor w/ 3 levels "Bad","Good","Medium": 1 2 3 3 1 1 3 2 3 3 ...
##  $ Age        : num  42 65 59 55 38 78 71 67 76 76 ...
##  $ Education  : num  17 10 12 14 13 16 15 10 10 17 ...
##  $ Urban      : Factor w/ 2 levels "No","Yes": 2 2 2 2 2 1 2 2 1 1 ...
##  $ US         : Factor w/ 2 levels "No","Yes": 2 2 2 2 1 2 1 2 1 2 ...

We need to divide our dataset into two parts. One will be the training set and the other the hold-out set. Below is the code.

set.seed(7)
train<-sample(x=400,size=200)

Now, for those who are familiar with R, you know that we haven’t actually made our training set yet. We are going to use the “train” object to index rows from the “Carseats” dataset. What we did was set the seed so that the results can be replicated. Then we used the “sample” function with two arguments, “x” and “size”. “x” represents the number of examples in the “Carseats” dataset and “size” represents how big we want the sample to be. In other words, we want 200 of the 400 examples to be in the training set. Therefore, R will randomly select 200 numbers between 1 and 400.
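
To see that “train” really is just an index vector (a quick check, not in the original post), you can look at its first few values and its length.

head(train) # a few randomly drawn row numbers; the exact values depend on the seed
length(train) # 200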

We will now fit our initial model

car.lm<-lm(CompPrice ~ Income+Sales+Advertising+Population+Price+ShelveLoc+Age+Education+Urban, data = Carseats,subset = train)

The code above should not be new. However, one unique twist is the use of the “subset” argument. What this argument does is tell R to only use rows that are in the “train” index. Next, we calculate the mean squared error.

mean((Carseats$CompPrice-predict(car.lm,Carseats))[-train]^2)
## [1] 77.13932

Here is what the code above means

  1. We took the “CompPrice” values and subtracted the predictions made by the “car.lm” model we developed.
  2. We used the test set, which here is identified as “-train”; the minus sign means everything that is not in the “train” index.
  3. The results were squared and averaged (an equivalent way to write this calculation is shown after this list).
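
Here is an equivalent, perhaps easier-to-read, way of writing the same calculation (my addition); it produces the same value.

test.preds<-predict(car.lm,Carseats[-train,]) # predictions for the hold-out rows only
mean((Carseats$CompPrice[-train]-test.preds)^2) # hold-out MSE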

The results here are the baseline comparison. We will now make two more models, each with a polynomial in one of the variables. First, we will square the “Income” variable.

car.lm2<-lm(CompPrice ~ Income+Sales+Advertising+Population+I(Income^2)+Price+ShelveLoc+Age+Education+Urban, data = Carseats,subset = train)
mean((Carseats$CompPrice-predict(car.lm2,Carseats))[-train]^2)
## [1] 75.68999

You can see that there is a small decrease in the MSE. Also, notice the use of the “I” function which allows us to square “income”. Now, let’s try a cubic model

car.lm3<-lm(CompPrice ~ Income+Sales+Advertising+Population+I(Income^3)+Price+ShelveLoc+Age+Education+Urban, data = Carseats,subset = train)
mean((Carseats$CompPrice-predict(car.lm3,Carseats))[-train]^2)
## [1] 75.84575

This time there was an increase when compared to the second model. As such, higher order polynomials will probably not improve the model.

Conclusion

This post provided a simple example of assessing several different models using the validation set approach. However, in practice, this approach is not used as frequently because there are now many other ways to do this. Yet, it is still good to be familiar with a standard approach such as this.

Additive Assumption and Multiple Regression

In regression, one of the assumptions is the additive assumption. This assumption states that the influence of a predictor variable on the dependent variable is independent of any other influence. However, in practice, it is common that this assumption does not hold.

In this post, we will look at how to address violations of the additive assumption through the use of interactions in a regression model.

An interaction effect occurs when the effect of one predictor variable on the dependent variable depends on the level of another predictor variable. As such, their effects must be considered simultaneously rather than separately. This is done through the use of an interaction term. An interaction term is the product of the two predictor variables, as the short sketch below illustrates.
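
Below is a small sketch (my addition, using a simplified model with only these two predictors from the “Carseats” data that this post uses) showing that the interaction term really is just the product of the two variables.

library(ISLR)
data(Carseats)
m1<-lm(Sales~Income*Population,data = Carseats) # formula interaction notation
m2<-lm(Sales~Income+Population+I(Income*Population),data = Carseats) # explicit product
all.equal(unname(coef(m1)),unname(coef(m2))) # TRUE: identical coefficients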

Let’s begin by making a regular regression model before adding any interactions. To do this we will use the “Carseats” data from the “ISLR” package to predict “Sales”. Below is the code.

library(ISLR);library(ggplot2)
data(Carseats)
saleslm<-lm(Sales~.,Carseats)
summary(saleslm)
## 
## Call:
## lm(formula = Sales ~ ., data = Carseats)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -2.8692 -0.6908  0.0211  0.6636  3.4115 
## 
## Coefficients:
##                   Estimate Std. Error t value Pr(>|t|)    
## (Intercept)      5.6606231  0.6034487   9.380  < 2e-16 ***
## CompPrice        0.0928153  0.0041477  22.378  < 2e-16 ***
## Income           0.0158028  0.0018451   8.565 2.58e-16 ***
## Advertising      0.1230951  0.0111237  11.066  < 2e-16 ***
## Population       0.0002079  0.0003705   0.561    0.575    
## Price           -0.0953579  0.0026711 -35.700  < 2e-16 ***
## ShelveLocGood    4.8501827  0.1531100  31.678  < 2e-16 ***
## ShelveLocMedium  1.9567148  0.1261056  15.516  < 2e-16 ***
## Age             -0.0460452  0.0031817 -14.472  < 2e-16 ***
## Education       -0.0211018  0.0197205  -1.070    0.285    
## UrbanYes         0.1228864  0.1129761   1.088    0.277    
## USYes           -0.1840928  0.1498423  -1.229    0.220    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 1.019 on 388 degrees of freedom
## Multiple R-squared:  0.8734, Adjusted R-squared:  0.8698 
## F-statistic: 243.4 on 11 and 388 DF,  p-value: < 2.2e-16

The results are rather excellent for the social sciences. The model explains 87.3% of the variance in “Sales”. The current results that we have are known as main effects. These are effects that directly influence the dependent variable. Most regression models only include main effects.

We will now examine an interaction effect between two continuous variables. Let’s see if there is an interaction between “Population” and “Income”.

saleslm1<-lm(Sales~CompPrice+Income+Advertising+Population+Price+Age+Education+US+
                     Urban+ShelveLoc+Population*Income, Carseats)
summary(saleslm1)
## 
## Call:
## lm(formula = Sales ~ CompPrice + Income + Advertising + Population + 
##     Price + Age + Education + US + Urban + ShelveLoc + Population * 
##     Income, data = Carseats)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -2.8699 -0.7624  0.0139  0.6763  3.4344 
## 
## Coefficients:
##                     Estimate Std. Error t value Pr(>|t|)    
## (Intercept)        6.195e+00  6.436e-01   9.625   <2e-16 ***
## CompPrice          9.262e-02  4.126e-03  22.449   <2e-16 ***
## Income             7.973e-03  3.869e-03   2.061   0.0400 *  
## Advertising        1.237e-01  1.107e-02  11.181   <2e-16 ***
## Population        -1.811e-03  9.524e-04  -1.901   0.0580 .  
## Price             -9.511e-02  2.659e-03 -35.773   <2e-16 ***
## Age               -4.566e-02  3.169e-03 -14.409   <2e-16 ***
## Education         -2.157e-02  1.961e-02  -1.100   0.2722    
## USYes             -2.160e-01  1.497e-01  -1.443   0.1498    
## UrbanYes           1.330e-01  1.124e-01   1.183   0.2375    
## ShelveLocGood      4.859e+00  1.523e-01  31.901   <2e-16 ***
## ShelveLocMedium    1.964e+00  1.255e-01  15.654   <2e-16 ***
## Income:Population  2.879e-05  1.253e-05   2.298   0.0221 *  
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 1.013 on 387 degrees of freedom
## Multiple R-squared:  0.8751, Adjusted R-squared:  0.8712 
## F-statistic:   226 on 12 and 387 DF,  p-value: < 2.2e-16

The new contribution is at the bottom of the coefficient table and is the “Income:Population” coefficient. What this means is that the effect of “Income” on “Sales” depends on the level of “Population” (and vice versa). In other words, the “Income:Population” coefficient captures their combined, simultaneous effect on “Sales” rather than just their independent effects.

This makes practical sense as well: the larger the population, the more available income, and vice versa. However, for our current model, the improvement in the r-squared is relatively small, and the actual effect is a small increase in sales. Below is a graph of income and population by sales. Notice how the lines cross. This is a visual of what an interaction looks like; the lines are not parallel by any means.

ggplot(data=Carseats, aes(x=Income, y=Sales, group=1)) +geom_smooth(method=lm,se=F)+ 
        geom_smooth(aes(Population,Sales), method=lm, se=F,color="black")+xlab("Income and Population")+labs(
                title="Income in Blue Population in Black")

We will now repeat this process but this time using a categorical variable and a continuous variable. We will look at the interaction between “US” location (categorical) and “Advertising” (continuous).

saleslm2<-lm(Sales~CompPrice+Income+Advertising+Population+Price+Age+Education+US+
                     Urban+ShelveLoc+US*Advertising, Carseats)
summary(saleslm2)
## 
## Call:
## lm(formula = Sales ~ CompPrice + Income + Advertising + Population + 
##     Price + Age + Education + US + Urban + ShelveLoc + US * Advertising, 
##     data = Carseats)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -2.8531 -0.7140  0.0266  0.6735  3.3773 
## 
## Coefficients:
##                     Estimate Std. Error t value Pr(>|t|)    
## (Intercept)        5.6995305  0.6023074   9.463  < 2e-16 ***
## CompPrice          0.0926214  0.0041384  22.381  < 2e-16 ***
## Income             0.0159111  0.0018414   8.641  < 2e-16 ***
## Advertising        0.2130932  0.0530297   4.018 7.04e-05 ***
## Population         0.0001540  0.0003708   0.415   0.6782    
## Price             -0.0954623  0.0026649 -35.823  < 2e-16 ***
## Age               -0.0463674  0.0031789 -14.586  < 2e-16 ***
## Education         -0.0233500  0.0197122  -1.185   0.2369    
## USYes             -0.1057320  0.1561265  -0.677   0.4987    
## UrbanYes           0.1191653  0.1127047   1.057   0.2910    
## ShelveLocGood      4.8726025  0.1532599  31.793  < 2e-16 ***
## ShelveLocMedium    1.9665296  0.1259070  15.619  < 2e-16 ***
## Advertising:USYes -0.0933384  0.0537807  -1.736   0.0834 .  
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 1.016 on 387 degrees of freedom
## Multiple R-squared:  0.8744, Adjusted R-squared:  0.8705 
## F-statistic: 224.5 on 12 and 387 DF,  p-value: < 2.2e-16

Again, you can see that when the store is in the US, you also have to consider the advertising budget. The negative interaction coefficient indicates that the effect of advertising on sales is somewhat weaker for stores in the US. What this means in practice is that advertising in the US is not as beneficial as advertising outside the US.

Below you can again see a visual of the interaction effect, where the lines for US yes and no cross each other.

ggplot(data=Carseats, aes(x=Advertising, y=Sales, group = US, colour = US)) +
        geom_smooth(method=lm,se=F)+scale_x_continuous(limits = c(0, 25))+scale_y_continuous(limits = c(0, 25))

Lastly, we will look at an interaction effect for two categorical variables.

saleslm3<-lm(Sales~CompPrice+Income+Advertising+Population+Price+Age+Education+US+
                     Urban+ShelveLoc+ShelveLoc*US, Carseats)
summary(saleslm3)
## 
## Call:
## lm(formula = Sales ~ CompPrice + Income + Advertising + Population + 
##     Price + Age + Education + US + Urban + ShelveLoc + ShelveLoc * 
##     US, data = Carseats)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -2.8271 -0.6839  0.0213  0.6407  3.4537 
## 
## Coefficients:
##                         Estimate Std. Error t value Pr(>|t|)    
## (Intercept)            5.8120748  0.6089695   9.544   <2e-16 ***
## CompPrice              0.0929370  0.0041283  22.512   <2e-16 ***
## Income                 0.0158793  0.0018378   8.640   <2e-16 ***
## Advertising            0.1223281  0.0111143  11.006   <2e-16 ***
## Population             0.0001899  0.0003721   0.510   0.6100    
## Price                 -0.0952439  0.0026585 -35.826   <2e-16 ***
## Age                   -0.0459380  0.0031830 -14.433   <2e-16 ***
## Education             -0.0267021  0.0197807  -1.350   0.1778    
## USYes                 -0.3683074  0.2379400  -1.548   0.1225    
## UrbanYes               0.1438775  0.1128171   1.275   0.2030    
## ShelveLocGood          4.3491643  0.2734344  15.906   <2e-16 ***
## ShelveLocMedium        1.8967193  0.2084496   9.099   <2e-16 ***
## USYes:ShelveLocGood    0.7184116  0.3320759   2.163   0.0311 *  
## USYes:ShelveLocMedium  0.0907743  0.2631490   0.345   0.7303    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 1.014 on 386 degrees of freedom
## Multiple R-squared:  0.8753, Adjusted R-squared:  0.8711 
## F-statistic: 208.4 on 13 and 386 DF,  p-value: < 2.2e-16

In this case, we can see that when the store is in the US and the shelf location is good there is an additional effect on sales compared to a bad location. The plot below is a visual of this. However, the interaction is harder to see because the x-axis has only two categories.
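
As before, we can make the interaction concrete with a little arithmetic on the coefficients reported above, comparing the effect of a good shelf location (relative to a bad one) inside and outside the US.

goodShelf_nonUS<-4.3491643 #effect of a good shelf location outside the US
goodShelf_US<-4.3491643+0.7184116 #effect of a good shelf location inside the US
goodShelf_nonUS
goodShelf_US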

ggplot(data=Carseats, aes(x=US, y=Sales, group = ShelveLoc, colour = ShelveLoc)) +
        geom_smooth(method=lm,se=F)

Conclusion

Interaction effects are a great way to fine-tune a model, especially for explanatory purposes. Often, the change in R-squared is not large enough to matter for prediction, but interactions can still provide a nuanced understanding of the relationships among the variables.

Linear Regression vs Bayesian Regression

In this post, we are going to look at Bayesian regression. In particular, we will compare the results of ordinary least squares regression with Bayesian regression.

Bayesian Statistics

Bayesian statistics involves the use of probabilities rather than frequencies when addressing uncertainty. This allows you to determine the distribution of the model parameters and not just point estimates. It is done by averaging over the model parameters, that is, by marginalizing the joint probability distribution.

Linear Regression

We will now develop our two models. The first model will be a normal regression and the second a Bayesian model. We will be looking at factors that affect the tax rate of homes in the “Hedonic” dataset in the “Ecdat” package. We will load our packages and partition our data. Below is some initial code

library(ISLR);library(caret);library(arm);library(Ecdat);library(gridExtra)
data("Hedonic")
inTrain<-createDataPartition(y=Hedonic$tax,p=0.7, list=FALSE)
trainingset <- Hedonic[inTrain, ]
testingset <- Hedonic[-inTrain, ]
str(Hedonic)
## 'data.frame':    506 obs. of  15 variables:
##  $ mv     : num  10.09 9.98 10.45 10.42 10.5 ...
##  $ crim   : num  0.00632 0.02731 0.0273 0.03237 0.06905 ...
##  $ zn     : num  18 0 0 0 0 0 12.5 12.5 12.5 12.5 ...
##  $ indus  : num  2.31 7.07 7.07 2.18 2.18 ...
##  $ chas   : Factor w/ 2 levels "no","yes": 1 1 1 1 1 1 1 1 1 1 ...
##  $ nox    : num  28.9 22 22 21 21 ...
##  $ rm     : num  43.2 41.2 51.6 49 51.1 ...
##  $ age    : num  65.2 78.9 61.1 45.8 54.2 ...
##  $ dis    : num  1.41 1.6 1.6 1.8 1.8 ...
##  $ rad    : num  0 0.693 0.693 1.099 1.099 ...
##  $ tax    : int  296 242 242 222 222 222 311 311 311 311 ...
##  $ ptratio: num  15.3 17.8 17.8 18.7 18.7 ...
##  $ blacks : num  0.397 0.397 0.393 0.395 0.397 ...
##  $ lstat  : num  -3 -2.39 -3.21 -3.53 -2.93 ...
##  $ townid : int  1 2 2 3 3 3 4 4 4 4 ...

We will now create our regression model

ols.reg<-lm(tax~.,trainingset)
summary(ols.reg)
## 
## Call:
## lm(formula = tax ~ ., data = trainingset)
## 
## Residuals:
##      Min       1Q   Median       3Q      Max 
## -180.898  -35.276    2.731   33.574  200.308 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)    
## (Intercept) 305.1928   192.3024   1.587  0.11343    
## mv          -41.8746    18.8490  -2.222  0.02697 *  
## crim          0.3068     0.6068   0.506  0.61339    
## zn            1.3278     0.2006   6.618 1.42e-10 ***
## indus         7.0685     0.8786   8.045 1.44e-14 ***
## chasyes     -17.0506    15.1883  -1.123  0.26239    
## nox           0.7005     0.4797   1.460  0.14518    
## rm           -0.1840     0.5875  -0.313  0.75431    
## age           0.3054     0.2265   1.349  0.17831    
## dis          -7.4484    14.4654  -0.515  0.60695    
## rad          98.9580     6.0964  16.232  < 2e-16 ***
## ptratio       6.8961     2.1657   3.184  0.00158 ** 
## blacks      -29.6389    45.0043  -0.659  0.51061    
## lstat       -18.6637    12.4674  -1.497  0.13532    
## townid        1.1142     0.1649   6.758 6.07e-11 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 63.72 on 341 degrees of freedom
## Multiple R-squared:  0.8653, Adjusted R-squared:  0.8597 
## F-statistic: 156.4 on 14 and 341 DF,  p-value: < 2.2e-16

The model does a reasonable job. Next, we will do our prediction and compare the results with the test set using correlation, summary statistics, and the mean absolute error. In the code below, we use the “predict.lm” function and include the arguments “interval” for the prediction as well as “se.fit” for the standard error

ols.regTest<-predict.lm(ols.reg,testingset,interval = 'prediction',se.fit = T)

Below is the code for the correlation, summary stats, and mean absolute error. For MAE, we need to create a function.

cor(testingset$tax,ols.regTest$fit[,1])
## [1] 0.9313795
summary(ols.regTest$fit[,1])
##    Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
##   144.7   288.3   347.6   399.4   518.4   684.1
summary(trainingset$tax)
##    Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
##   188.0   279.0   330.0   410.4   666.0   711.0
MAE<-function(actual, predicted){
        mean(abs(actual-predicted))
}
MAE(ols.regTest$fit[,1], testingset$tax)
## [1] 41.07212

The correlation is excellent, the summary stats are similar, and the error is not unreasonable. Next, we will plot the actual and predicted values.

We now need to combine some data into one dataframe. In particular, we need the following

  • The actual dependent variable results
  • The predicted dependent variable results
  • The upper confidence value of the prediction
  • The lower confidence value of the prediction

The code is below

yout.ols <- as.data.frame(cbind(testingset$tax,ols.regTest$fit))
ols.upr <- yout.ols$upr
ols.lwr <- yout.ols$lwr

We can now plot this

p.ols <- ggplot(data = yout.ols, aes(x = testingset$tax, y = ols.regTest$fit[,1])) + geom_point() + ggtitle("Ordinary Regression") + labs(x = "Actual", y = "Predicted")
p.ols + geom_errorbar(ymin = ols.lwr, ymax = ols.upr)

1.png

You can see the strong linear relationship. However, the confidence intervals are rather wide. Let’s see how Bayes does.

Bayes Regression

Bayesian regression uses the “bayesglm” function from the “arm” package. We need to set the family to “gaussian” and the link to “identity”. In addition, we have to set the “prior.df” (prior degrees of freedom) argument to infinity, which indicates that we want a normal prior distribution.

bayes.reg<-bayesglm(tax~.,family=gaussian(link=identity),trainingset,prior.df = Inf)
bayes.regTest<-predict.glm(bayes.reg,newdata = testingset,se.fit = T)
cor(testingset$tax,bayes.regTest$fit)
## [1] 0.9313793
summary(bayes.regTest$fit)
##    Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
##   144.7   288.3   347.5   399.4   518.4   684.1
summary(trainingset$tax)
##    Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
##   188.0   279.0   330.0   410.4   666.0   711.0
MAE(bayes.regTest$fit, testingset$tax)
## [1] 41.07352

The numbers are essentially the same, which raises the question: what is the benefit of Bayesian regression? The answer is in the confidence intervals. Next, we will calculate the confidence intervals for the Bayesian model.

yout.bayes <- as.data.frame(cbind(testingset$tax,bayes.regTest$fit))
names(yout.bayes) <- c("tax", "fit")
critval <- 1.96 #approx for 95% CI
bayes.upr <- bayes.regTest$fit + critval * bayes.regTest$se.fit
bayes.lwr <- bayes.regTest$fit - critval * bayes.regTest$se.fit

We now create our Bayesian regression plot.

p.bayes <- ggplot(data = yout.bayes, aes(x = tax, y = fit)) + geom_point() + ggtitle("Bayesian Regression Prediction") + labs(x = "Actual", y = "Predicted")

Lastly, we display both plots as a comparison.

ols.plot <-  p.ols + geom_errorbar(ymin = ols.lwr, ymax = ols.upr)
bayes.plot <-  p.bayes + geom_errorbar(ymin = bayes.lwr, ymax = bayes.upr)
grid.arrange(ols.plot,bayes.plot,ncol=2)

1

As you can see, the Bayesian approach gives much more compact confidence intervals. This is because, in the Bayesian approach, a distribution of parameters is calculated from the posterior distribution. These values are then averaged to get the final prediction that appears on the plot. This reduces the variance and strengthens the confidence we can have in each individual example.
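
If you want to inspect those parameter distributions directly, the “arm” package can draw simulations from the approximate posterior of the fitted model. Below is a minimal sketch using the “sim” function on the model above; the number of draws and the use of the “@coef” slot are choices made only for illustration.

post.sims<-sim(bayes.reg,n.sims=1000) #draw 1,000 simulated coefficient vectors from the approximate posterior
post.coefs<-post.sims@coef #matrix with one row per draw and one column per coefficient
round(t(apply(post.coefs,2,quantile,probs=c(0.025,0.5,0.975))),3) #rough 95% intervals for each parameter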

Common Tasks in Machine Learning

Machine learning is used for a variety of tasks today, with a multitude of algorithms that can each do one or more of these tasks well. In this post, we will look at some of the most common tasks that machine learning algorithms perform. In particular, we will look at the following tasks.

  1. Regression
  2. Classification
  3. Forecasting
  4. Clustering
  5. Association rules
  6. Dimension reduction

Numbers 1-3 are examples of supervised learning, which is learning that involves a dependent variable. Numbers 4-6 are unsupervised learning, which does not involve a clearly labeled dependent variable.

Regression

Regression involves understanding the relationship between a continuous dependent variable and categorical and continuous independent variables. Understanding this relationship allows for numeric prediction of the dependent continuous variable.

Example algorithms for regression include linear regression and random forest for numeric prediction, as well as support vector machines and artificial neural networks.
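
As a minimal illustration of the regression task, the sketch below fits a linear model to the built-in “mtcars” dataset; the dataset and variables are chosen only for the example.

fit<-lm(mpg~wt+hp,data=mtcars) #continuous dependent variable predicted from two continuous predictors
summary(fit) #coefficients describe the relationship
predict(fit,newdata=data.frame(wt=3,hp=110)) #numeric prediction for a new example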

Classification

Classification involves the use of a categorical dependent variable with continuous and or categorical independent variables. The purpose is to classify examples into the groups in the dependent variable.

Examples of this are logistic regression as well as all the algorithms mentioned for regression. Many algorithms can do both regression and classification.
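
Below is a minimal classification sketch using logistic regression on two species from the built-in “iris” dataset; the data and predictors are chosen only for the example.

iris2<-subset(iris,Species!="setosa") #keep two classes for a binary problem
iris2$Species<-droplevels(iris2$Species)
clf<-glm(Species~Sepal.Length+Petal.Length,data=iris2,family=binomial)
head(predict(clf,type="response")) #predicted probabilities of the second class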

Forecasting

Forecasting is similar to regression. However, the difference is that the data is a time series. The goal remains the same of predicting future outcomes based on current available data. As such, a slightly different approach is needed because of the type of data involved.

Common algorithms for forecasting include ARIMA models and even artificial neural networks.
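
A minimal forecasting sketch with base R’s “arima” function and the built-in “AirPassengers” time series is below; the model order is arbitrary and only meant to show the workflow.

fit<-arima(AirPassengers,order=c(1,1,1),seasonal=c(0,1,1)) #seasonal ARIMA on a monthly series
predict(fit,n.ahead=12)$pred #point forecasts for the next 12 months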

Clustering

Clustering involves grouping together items in a dataset that are similar. This is done by detecting patterns in the data. The problem is that the number of clusters needed is usually not known in advance, which leads to a trial-and-error approach if there is no other theoretical support.

Common clustering algorithms include k-means and hierarchical clustering. Latent Dirichlet allocation is used often in text mining applications.
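
Below is a minimal k-means sketch on the numeric columns of the built-in “iris” dataset; the choice of three clusters is arbitrary here.

set.seed(123)
km<-kmeans(scale(iris[,1:4]),centers=3) #cluster on standardized features
table(km$cluster) #number of examples assigned to each cluster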

Association Rules

Associations rules find items that occur together in a dataset. A common application of association rules is market basket analysis.

Common algorithms include Apriori and the FP-Growth (frequent pattern growth) algorithm.
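
A minimal association rules sketch is below. It assumes the “arules” package and its bundled “Groceries” transactions are available; the support and confidence thresholds are arbitrary.

library(arules)
data("Groceries")
rules<-apriori(Groceries,parameter=list(supp=0.01,conf=0.5)) #mine rules above the thresholds
inspect(sort(rules,by="lift")[1:3]) #show the three rules with the highest lift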

Dimension Reduction

Dimension reduction involves combining several redundant features into one or more components that capture the majority of the variance. Reducing the number of features can increase the speed of the computation as well as reduce the risk of overfitting.

In machine learning, principal component analysis is often used for dimension reduction. However, factor analysis is sometimes used as well.
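
Below is a minimal dimension reduction sketch using principal component analysis on the numeric columns of the built-in “iris” dataset.

pca<-prcomp(iris[,1:4],scale.=TRUE) #standardize and extract components
summary(pca) #proportion of variance captured by each component
head(pca$x[,1:2]) #the first two components can replace the original four features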

Conclusion

In machine learning, there is always an appropriate tool for the job. This post provided insight into the main tasks of machine learning as well as some of the algorithms suited to each situation.

Data Munging with Dplyr

Data preparation, aka data munging, is what most data scientists spend the majority of their time doing. Extracting and transforming data is difficult, to say the least. Every dataset is different, with unique problems. This makes it hard to generalize best practices for transforming data so that it is suitable for analysis.

In this post, we will look at how to use the various functions in the “dplyr” package. This package provides numerous ways to develop features as well as explore the data. We will use the “attitude” dataset from base R for our analysis. Below is some initial code.

library(dplyr)
data("attitude")
str(attitude)
## 'data.frame':    30 obs. of  7 variables:
##  $ rating    : num  43 63 71 61 81 43 58 71 72 67 ...
##  $ complaints: num  51 64 70 63 78 55 67 75 82 61 ...
##  $ privileges: num  30 51 68 45 56 49 42 50 72 45 ...
##  $ learning  : num  39 54 69 47 66 44 56 55 67 47 ...
##  $ raises    : num  61 63 76 54 71 54 66 70 71 62 ...
##  $ critical  : num  92 73 86 84 83 49 68 66 83 80 ...
##  $ advance   : num  45 47 48 35 47 34 35 41 31 41 ...

You can see we have seven variables and only 30 observations. The first function that we will learn to use is the “select” function. This function allows you to select the columns of data you want to use. In order to use this feature, you need to know the names of the columns you want. Therefore, we will first use the “names” function to determine the names of the columns and then use the “select” function.

names(attitude)[1:3]
## [1] "rating"     "complaints" "privileges"
smallset<-select(attitude,rating:privileges)
head(smallset)
##   rating complaints privileges
## 1     43         51         30
## 2     63         64         51
## 3     71         70         68
## 4     61         63         45
## 5     81         78         56
## 6     43         55         49

The difference is probably obvious. Using the “select” function we have 3 instead of 7 variables. We can also exclude columns we do not want by placing a negative in front of the names of the columns. Below is the code

head(select(attitude,-(rating:privileges)))
##   learning raises critical advance
## 1       39     61       92      45
## 2       54     63       73      47
## 3       69     76       86      48
## 4       47     54       84      35
## 5       66     71       83      47
## 6       44     54       49      34

We can also use the “rename” function to change the names of columns. In our example below, we will change the name of the “rating” to “rates.” The code is below. Keep in mind that the new name for the column is to the left of the equal sign and the old name is to the right

attitude<-rename(attitude,rates=rating)
head(attitude)
##   rates complaints privileges learning raises critical advance
## 1    43         51         30       39     61       92      45
## 2    63         64         51       54     63       73      47
## 3    71         70         68       69     76       86      48
## 4    61         63         45       47     54       84      35
## 5    81         78         56       66     71       83      47
## 6    43         55         49       44     54       49      34

The “select” function can be used in combination with other functions to find specific columns in the dataset. For example, we will use the “ends_with” function inside the “select” function to find all columns that end with the letter s.

s_set<-head(select(attitude,ends_with("s")))
s_set
##   rates complaints privileges raises
## 1    43         51         30     61
## 2    63         64         51     63
## 3    71         70         68     76
## 4    61         63         45     54
## 5    81         78         56     71
## 6    43         55         49     54

The “filter” function allows you to select rows from a dataset based on criteria. In the code below we will select only rows that have a 75 or higher in the “raises” variable.

bigraise<-filter(attitude,raises>75)
bigraise
##   rates complaints privileges learning raises critical advance
## 1    71         70         68       69     76       86      48
## 2    77         77         54       72     79       77      46
## 3    74         85         64       69     79       79      63
## 4    66         77         66       63     88       76      72
## 5    78         75         58       74     80       78      49
## 6    85         85         71       71     77       74      55

If you look closely, all values in the “raises” column are greater than 75. Of course, you can have more than one criterion. In the code below there are two.

filter(attitude, raises>70 & learning<67)
##   rates complaints privileges learning raises critical advance
## 1    81         78         56       66     71       83      47
## 2    65         70         46       57     75       85      46
## 3    66         77         66       63     88       76      72

The “arrange” function allows you to sort the order of the rows. In the code below, we first sort the data in ascending order by the “critical” variable. Then we sort it in descending order by adding the “desc” function.

ascCritical<-arrange(attitude, critical)
head(ascCritical)
##   rates complaints privileges learning raises critical advance
## 1    43         55         49       44     54       49      34
## 2    81         90         50       72     60       54      36
## 3    40         37         42       58     50       57      49
## 4    69         62         57       42     55       63      25
## 5    50         40         33       34     43       64      33
## 6    71         75         50       55     70       66      41
descCritical<-arrange(attitude, desc(critical))
head(descCritical)
##   rates complaints privileges learning raises critical advance
## 1    43         51         30       39     61       92      45
## 2    71         70         68       69     76       86      48
## 3    65         70         46       57     75       85      46
## 4    61         63         45       47     54       84      35
## 5    81         78         56       66     71       83      47
## 6    72         82         72       67     71       83      31

The “mutate” function is useful for engineering features. In the code below, we will transform the “learning” variable by subtracting its mean from itself.

attitude<-mutate(attitude,learningtrend=learning-mean(learning))
head(attitude)
##   rates complaints privileges learning raises critical advance
## 1    43         51         30       39     61       92      45
## 2    63         64         51       54     63       73      47
## 3    71         70         68       69     76       86      48
## 4    61         63         45       47     54       84      35
## 5    81         78         56       66     71       83      47
## 6    43         55         49       44     54       49      34
##   learningtrend
## 1    -17.366667
## 2     -2.366667
## 3     12.633333
## 4     -9.366667
## 5      9.633333
## 6    -12.366667

You can also create logical variables with the “mutate” function. In the code below, we create a logical variable that is TRUE when the “critical” variable is 80 or higher and FALSE when it is less than 80. The new variable is called “highCritical”.

attitude<-mutate(attitude,highCritical=critical>=80)
head(attitude)
##   rates complaints privileges learning raises critical advance
## 1    43         51         30       39     61       92      45
## 2    63         64         51       54     63       73      47
## 3    71         70         68       69     76       86      48
## 4    61         63         45       47     54       84      35
## 5    81         78         56       66     71       83      47
## 6    43         55         49       44     54       49      34
##   learningtrend highCritical
## 1    -17.366667         TRUE
## 2     -2.366667        FALSE
## 3     12.633333         TRUE
## 4     -9.366667         TRUE
## 5      9.633333         TRUE
## 6    -12.366667        FALSE

The “group_by” function is used for creating summary statistics based on a specific variable. It is similar to the “aggregate” function in R. This function works in combination with the “summarize” function for our purposes here. We will group our data by the “highCritical” variable. This means our data will be viewed as either TRUE for “highCritical” or FALSE. The results of this function will be saved in an object called “hcgroups”

hcgroups<-group_by(attitude,highCritical)
head(hcgroups)
## # A tibble: 6 x 9
## # Groups:   highCritical [2]
##   rates complaints privileges learning raises critical advance
##   <dbl>      <dbl>      <dbl>    <dbl>  <dbl>    <dbl>   <dbl>
## 1    43         51         30       39     61       92      45
## 2    63         64         51       54     63       73      47
## 3    71         70         68       69     76       86      48
## 4    61         63         45       47     54       84      35
## 5    81         78         56       66     71       83      47
## 6    43         55         49       44     54       49      34
## # ... with 2 more variables: learningtrend <dbl>, highCritical <lgl>

Looking at the data you probably saw no difference. This is because we are not done yet. We need to summarize the data in order to see the results for our two groups in the “highCritical” variable.

We will now generate the summary statistics by using the “summarize” function. We specifically want to know the mean of the “complaints” variable based on the variable “highCritical.” Below is the code.

summarize(hcgroups,complaintsAve=mean(complaints))
## # A tibble: 2 x 2
##   highCritical complaintsAve
##          <lgl>         <dbl>
## 1        FALSE      67.31579
## 2         TRUE      65.36364

Of course, you could have learned this by running a t-test, but this is another approach.
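
As a side note, because dplyr functions compose well, the grouping and summarizing steps can also be written as a single pipeline using the pipe operator that dplyr makes available. Below is a sketch that should reproduce the same summary.

attitude %>%
        group_by(highCritical) %>%
        summarize(complaintsAve=mean(complaints))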

Conclusion

The “dplyr” package is one powerful tool for wrestling with data. There is nothing fundamentally new in this package. Instead, the coding is simpler than what you can execute using base R.

Analyzing Twitter Data in R

In this post, we will look at analyzing tweets from Twitter using R. Before beginning, if you plan to replicate this on your own, you will need to set up a developer account with Twitter. Below are the steps

Twitter Setup

  1. Go to https://dev.twitter.com/apps
  2. Create a twitter account if you do not already have one
  3. Next, you want to click “create new app”
  4. After entering the requested information be sure to keep the following information for R; consumer key, consumer secret, request token URL, authorize URL, access token URL

The instructions here are primarily for users of Linux. If you are using a Windows machine, you need to download a cacert.pem file. Below is the code.

download.file(url='http://curl.haxx.se/ca/cacert.pem',destfile='/YOUR_LOCATION/cacert.pem')

You need to save this file where it is appropriate. Below we will begin the analysis by loading the appropriate libraries.

R Setup

library(twitteR);library(ROAuth);library(RCurl);library(tm);library(wordcloud)

Next, we need to use all of the application information we generated when we created the developer account at Twitter. We will save the information in objects to use in R. In the example code below, “XXXX” is used where you should provide your own information. Sharing this kind of information would allow others to use my Twitter developer account. Below is the code.

my.key<-"XXXX" #consumer key
my.secret<-"XXXX" #consumer secret
my.accesstoken<-'XXXX' #access token
my.accesssecret<-'XXXX' ##access secret

Some of the information we just stored now needs to be passed to the “OAuthFactory” function of the “ROAuth” package. We will be passing the “my.key” and “my.secret”. We also need to add the request URL, access URL, and auth URL. Below is the code for all this.

cred<-OAuthFactory$new(consumerKey=my.key,consumerSecret=my.secret,requestURL='https://api.twitter.com/oauth/request_token',
                       accessURL='https://api.twitter.com/oauth/access_token',authURL='https://api.twitter.com/oauth/authorize')

If you are a Windows user, you need the cacert.pem file mentioned above. Use "cred$handshake(cainfo='LOCATION OF CACERT.PEM')" to complete the setup process, make sure to save your authentication, and then use "registerTwitterOAuth(cred)" to finish. For Linux users, the code is below.

setup_twitter_oauth(my.key, my.secret, my.accesstoken, my.accesssecret)

Data Preparation

We can now begin the analysis. We are going to search twitter for the term “Data Science.” We will look for 1,500 of the most recent tweets that contain this term. To do this we will use the “searchTwitter” function. The code is as follows

ds_tweets<-searchTwitter("data science",n=1500)

We now need to do some things that are a little more complicated. First, we need to convert our “ds_tweets” object to a dataframe. This is just to save our search so we don’t have to run it again. The code below performs this.

ds_tweets.df<-do.call(rbind,lapply(ds_tweets,as.data.frame))
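
As an aside, the “twitteR” package also ships a convenience function that performs the same conversion in one step. Below is a sketch, assuming the same “ds_tweets” object.

ds_tweets.df2<-twListToDF(ds_tweets) #one-step conversion of a list of tweets to a dataframe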

Second, we need to find all the text in our “ds_tweets” object and convert this into a list of the tweet text. We will use the “sapply” function along with the “getText” method. Below is the code.

ds_tweets.list<-sapply(ds_tweets,function(x) x$getText())

Third, we need to turn our “ds_tweets.list” into a corpus.

ds_tweets.corpus<-Corpus(VectorSource(ds_tweets.list))  

Now we need to do a lot of cleaning of the text. In particular, we need to

  • Make all words lower case
  • Remove punctuation
  • Get rid of funny characters (i.e. #, /, etc.)
  • Remove stopwords (words that lack meaning, such as “the”)

To do this we need to use a combination of functions in the “tm” package as well as some personally made functions

ds_tweets.corpus<-tm_map(ds_tweets.corpus,removePunctuation)
removeSpecialChars <- function(x) gsub("[^a-zA-Z0-9 ]","",x)#remove garbage terms
ds_tweets.corpus<-tm_map(ds_tweets.corpus,removeSpecialChars) #application of custom function
ds_tweets.corpus<-tm_map(ds_tweets.corpus,function(x) removeWords(x,stopwords())) #removes stop words
ds_tweets.corpus<-tm_map(ds_tweets.corpus,tolower)

Data Analysis

We can make a word cloud for fun now

wordcloud(ds_tweets.corpus)
1.png

We now need to convert our corpus to a matrix for further analysis. In addition, we need to remove sparse terms as this reduces the size of the matrix without losing much information. The value to set it to is at the discretion of the researcher. Below is the code

ds_tweets.tdm<-TermDocumentMatrix(ds_tweets.corpus)
ds_tweets.tdm<-removeSparseTerms(ds_tweets.tdm,sparse = .8)#remove sparse terms

We’ve looked at how to find the most frequent terms in another post. Below is the code for finding the words that appear at least 15 times.

findFreqTerms(ds_tweets.tdm,15)
##  [1] "datasto"      "demonstrates" "download"     "executed"    
##  [5] "hard"         "key"          "leaka"        "locally"     
##  [9] "memory"       "mitchellvii"  "now"          "portable"    
## [13] "science"      "similarly"    "data"

Below are words that are highly correlated with the term “key”.

findAssocs(ds_tweets.tdm,'key',.95)
## $key
## demonstrates     download     executed        leaka      locally 
##         0.99         0.99         0.99         0.99         0.99 
##       memory      datasto         hard  mitchellvii     portable 
##         0.99         0.98         0.98         0.98         0.98 
##    similarly 
##         0.98

For the final trick, we will make a hierarchical agglomerative cluster. This will clump words that are more similar next to each other. We first need to convert our current “ds_tweets.tdm” into a regular matrix. Then we need to scale it because the distances need to be standardized. Below is the code.

ds_tweets.mat<-as.matrix(ds_tweets.tdm)
ds_tweets.mat.scale<-scale(ds_tweets.mat)

Now, we need to calculate the distance statistically

ds_tweets.dist<-dist(ds_tweets.mat.scale,method = 'euclidean')

At last, we can make the clusters.

ds_tweets.fit<-hclust(ds_tweets.dist,method = 'ward')
plot(ds_tweets.fit)

1

Looking at the chart, it appears we have six main clusters. We can highlight them using the code below.

plot(ds_tweets.fit)
groups<-cutree(ds_tweets.fit,k=6)
rect.hclust(ds_tweets.fit,k=6)

1.png

Conclusion

This post provided an example of how to pull data from Twitter for text analysis. There are many steps, but useful insights can be gained from this sort of research.

Diversity and Lexical Dispersion Analysis in R

In this post, we will learn how to conduct a diversity and lexical dispersion analysis in R. Diversity analysis is a measure of the breadth of an author’s vocabulary in a text. R provides several calculations of this in its output.

Lexical dispersion is used for knowing where or when certain words are used in a text. This is useful for identifying patterns if this is a goal of your data exploration.

We will conduct our two analyses by comparing two famous philosophical texts

  • Analects
  • The Prince

These books are available at the Gutenberg Project. You can go to the site, type in the titles, and download them to your computer.

We will use the “qdap” package in order to complete the analysis. Below is some initial code.

library(qdap)

Data Preparation

Below are the steps we need to take to prepare the data

  1. Paste the text files into R
  2. Convert the text files to ASCII format
  3. Convert the ASCII format to data frames
  4. Split the sentences in the data frame
  5. Add a variable that indicates the book name
  6. Combine the two books into one dataframe

We now need to prepare the two texts. First, we move them into R using the “paste” function.

analects<-paste(scan(file ="C:/Users/darrin/Documents/R/R working directory/blog/blog/Text/Analects.txt",what='character'),collapse=" ")
prince<-paste(scan(file ="C:/Users/darrin/Documents/R/R working directory/blog/blog/Text/Prince.txt",what='character'),collapse=" ")

We must convert the text files to ASCII format so that R is able to interpret them.

analects<-iconv(analects,"latin1","ASCII","")
prince<-iconv(prince,"latin1","ASCII","")

For each book, we need to make a dataframe. The argument “texts” gives our dataframe one variable called “texts” which contains all the words in each book. Below is the code

analects<-data.frame(texts=analects)
prince<-data.frame(texts=prince)

With the dataframes completed, we can now split the variable “texts” in each dataframe by sentence. The “sentSplit” function will do this.

analects<-sentSplit(analects,'texts')
prince<-sentSplit(prince,'texts')

Next, we add the variable “book” to each dataframe. What this does is that for each row or sentence in the dataframe the “book” variable will tell you which book the sentence came from. This will be valuable for comparative purposes.

analects$book<-"analects"
prince$book<-"prince"

Now we combine the two books into one dataframe. The data preparation is now complete.

twobooks<-rbind(analects,prince)

Data Analysis

We will begin with the diversity analysis

div<-diversity(twobooks$texts,twobooks$book)
div
##       book    wc simpson shannon collision berger_parker brillouin
## 1 analects 30995   0.989   6.106     4.480         0.067     5.944
## 2   prince 52105   0.989   6.324     4.531         0.059     6.177

For most of the metrics, the diversity in the use of vocabulary is the same despite being different books from different eras in history. How these numbers are calculated is beyond the scope of this post.

Next, we will calculate the lexical dispersion of the two books. We will look at three common themes in history: money, war, and marriage.

dispersion_plot(twobooks$texts,grouping.var=twobooks$book,c("money","war",'marriage'))

1

The tick marks show where each word appears. For example, money appears only at the beginning of Analects but is more spread out in The Prince. War is evenly dispersed in both books, and marriage only appears in The Prince.

Conclusion

This analysis showed additional tools that can be used to analyze text in R.

Readability and Formality Analysis in R

In this post, we will look at how to assess the readability and formality of a text using R. By readability, we mean the use of a formula that will provide us with the grade level at which the text is roughly written. This is highly useful information in the field of education and even medicine.

Formality provides insights into how the text relates to the reader. The more formal the writing the greater the distance between author and reader. Formal words are nouns, adjectives, prepositions, and articles while informal (contextual) words are pronouns, verbs, adverbs, and interjections.

The F-measure calculates a score of formality based on the proportions of formal and informal words.
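
To make this concrete, below is a small sketch of the F-measure style calculation (often attributed to Heylighen and Dewaele); the function name and the example percentages are made up for illustration, and the “qdap” package handles all of this for you.

f_measure<-function(noun,adj,prep,art,pron,verb,adv,interj){
        ((noun+adj+prep+art)-(pron+verb+adv+interj)+100)/2 #formal minus informal word-class percentages
}
f_measure(25,9,15,8,11,22,6,0) #example with made-up percentages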

We will conduct our two analyses by comparing two famous philosophical texts

  • Analects
  • The Prince

These books are available at the Gutenberg Project. You can go to the site, type in the titles, and download them to your computer.

We will use the “qdap” package in order to complete the analysis. Below is some initial code.

library(qdap)

Data Preparation

Below are the steps we need to take to prepare the data

  1. Paste the text files into R
  2. Convert the text files to ASCII format
  3. Convert the ASCII format to data frames
  4. Split the sentences in the data frame
  5. Add a variable that indicates the book name
  6. Combine the two books into one dataframe

We now need to prepare the two texts. The “paste” function will move the text into the R environment.

analects<-paste(scan(file ="C:/Users/darrin/Documents/R/R working directory/blog/blog/Text/Analects.txt",what='character'),collapse=" ")
prince<-paste(scan(file ="C:/Users/darrin/Documents/R/R working directory/blog/blog/Text/Prince.txt",what='character'),collapse=" ")

The texts need to be converted to ASCII format, and the code below does this.

analects<-iconv(analects,"latin1","ASCII","")
prince<-iconv(prince,"latin1","ASCII","")

For each book, we need to make a dataframe. The argument “texts” gives our dataframe one variable called “texts” which contains all the words in each book. Below is the code.

analects<-data.frame(texts=analects)
prince<-data.frame(texts=prince)

With the dataframes completed, we can now split the variable “texts” in each dataframe by sentence. The “sentSplit” function will do this.

analects<-sentSplit(analects,'texts')
prince<-sentSplit(prince,'texts')

Next, we add the variable “book” to each dataframe. What this does is that for each row or sentence in the dataframe the “book” variable will tell you which book the sentence came from. This will be useful for comparative purposes.

analects$book<-"analects"
prince$book<-"prince"

Lastly, we combine the two books into one dataframe. The data preparation is now complete.

twobooks<-rbind(analects,prince)

Data Analysis

We will begin with the readability. The “automated_readability_index” function will calculate this for us.

ari<-automated_readability_index(twobooks$texts,twobooks$book)
ari
##       book word.count sentence.count character.count Automated_Readability_Index
## 1 analects      30995           3425          132981                       3.303
## 2   prince      52105           1542          236605                      16.853

Analects is written at a third-grade level, but The Prince is written at grade 16. This is equivalent to a senior in college. As such, The Prince is a challenging book to read.

Next, we will calculate the formality of the two books. The “formality” function is used for this.

form<-formality(twobooks$texts,twobooks$book)
form
##       book word.count formality
## 1   prince      52181     60.02
## 2 analects      31056     58.36

The books are mildly formal. The code below gives you the breakdown of the word use by percentage.

form$form.prop.by
##       book word.count  noun  adj  prep articles pronoun  verb adverb
## 1 analects      31056 25.05 8.63 14.23     8.49   10.84 22.92   5.86
## 2   prince      52181 21.51 9.89 18.42     7.59   10.69 20.74   5.94
##   interj other
## 1   0.05  3.93
## 2   0.00  5.24

The proportions are consistent when the two books are compared. Below is a visual of the table we just examined.

plot(form)

1.png

Conclusion

Readability and formality are additional text mining tools that can provide insights for data scientists. Both of these analysis tools can provide suggestions for enhancing communication or comparing different authors and writing styles.

Sentiment Analysis in R

In this post, we will perform a sentiment analysis in R. Sentiment analysis involves the use of dictionaries to give each word in a sentence a score. A more positive word is given a higher positive number, while a more negative word is given a more negative number. The score is then calculated based on the position of the word, its weight, as well as other more complex factors. This is then performed for the entire corpus to give it a score.
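
Before applying this to whole books, it can help to see the scoring on a single made-up sentence. Below is a minimal sketch using the same “polarity” function from “qdap” that we will use later; the sentence is invented for illustration.

library(qdap)
polarity("This book is wonderfully clear but occasionally tedious.") #score a single toy sentence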

We will do a sentiment analysis in which we will compare three famous philosophical texts

  • Analects
  • The Prince
  • Pensees

These books are available at the Gutenberg Project. You can go to the site, type in the titles, and download them to your computer.

We will use the “qdap” package in order to complete the sentiment analysis. Below is some initial code.

library(qdap)

Data Preparation

Below are the steps we need to take to prepare the data

  1. Paste the text files into R
  2. Convert the text files to ASCII format
  3. Convert the ASCII format to data frames
  4. Split the sentences in the data frame
  5. Add a variable that indicates the book name
  6. Combine the three books into one dataframe

We now need to prepare the three texts. First, we move them into R using the “paste” function.

analects<-paste(scan(file ="C:/Users/darrin/Documents/R/R working directory/blog/blog/Text/Analects.txt",what='character'),collapse=" ")
pensees<-paste(scan(file ="C:/Users/darrin/Documents/R/R working directory/blog/blog/Text/Pascal.txt",what='character'),collapse=" ")
prince<-paste(scan(file ="C:/Users/darrin/Documents/R/R working directory/blog/blog/Text/Prince.txt",what='character'),collapse=" ")

We need to convert the text files to ASCII format so that R is able to read them.

analects<-iconv(analects,"latin1","ASCII","")
pensees<-iconv(pensees,"latin1","ASCII","")
prince<-iconv(prince,"latin1","ASCII","")

Now we make our dataframe for each book. The argument “texts” gives our dataframe one variable called “texts” which contains all the words in each book. Below is the code.

analects<-data.frame(texts=analects)
pensees<-data.frame(texts=pensees)
prince<-data.frame(texts=prince)

With the dataframes completed, we can now split the variable “texts” in each dataframe by sentence. We will use the “sentSplit” function to do this.

analects<-sentSplit(analects,'texts')
pensees<-sentSplit(pensees,'texts')
prince<-sentSplit(prince,'texts')

Next, we add the variable “book” to each dataframe. What this does is that for each row or sentence in the dataframe the “book” variable will tell you which book the sentence came from. This will be valuable for comparative purposes.

analects$book<-"analects"
pensees$book<-"pensees"
prince$book<-"prince"

Now we combine all three books into one dataframe. The data preparation is now complete.

threebooks<-rbind(analects,pensees,prince)

Data Analysis

We are now ready to perform the actual sentiment analysis. We will use the “polarity” function for this. Inside the function, we need to use the text and the book variables. Below is the code.

pol<-polarity(threebooks$texts,threebooks$book)

We can see the results and a plot in the code below.

pol
##       book total.sentences total.words ave.polarity sd.polarity stan.mean.polarity
## 1 analects            3425       31383        0.076       0.254              0.299
## 2  pensees            7617      101043        0.008       0.278              0.028
## 3   prince            1542       52281        0.017       0.296              0.056

The table is mostly self-explanatory. We have the total number of sentences and words in the first two columns. Next is the average polarity and the standard deviation. Lastly, we have the standardized mean. The last column is commonly used for comparison purposes. As such, it appears that Analects is the most positive book by a large margin, with Pensees and The Prince being about the same and generally neutral.

plot(pol)

1.png

The top plot shows the polarity of each sentence over time or through the book. The bluer the more negative and the redder the more positive the sentence. The second plot shows the dispersion of the polarity.

There are many things to interpret from the second plot. For example, Pensees is more dispersed than the other two books in terms of polarity. The Prince is much less dispersed in comparison to the other books.

Another interesting task is to find the most negative and positive sentence. We need to take information from the “pol” dataframe and then use the “which.min” function to find the lowest scoring. The “which.min” function only gives the row. Therefore, we need to take this information and use it to find the actual sentence and the book. Below is the code.

pol.df<-pol$all #take the sentence-level polarity scores from pol
which.min(pol.df$polarity) #find the lowest scored sentence
## [1] 6343
pol.df$text.var[6343] #find the actual sentence
## [1] "Apart from Him there is but vice, misery, darkness, death, despair."
pol.df$book[6343] #find the actual book name
## [1] "pensees"

Pensees had the most negative sentence. You can see for yourself the clearly negative words which are vice, misery, darkness, death, and despair. We can repeat this for the most positive sentence

which.max(pol.df$polarity)
## [1] 4839
pol.df$text.var[4839]
## [1] "You will be faithful, honest, humble, grateful, generous, a sincere friend, truthful."
pol.df$book[4839]
## [1] "pensees"

Again Pensees has the most positive sentence with such words as faithful, honest, humble, grateful, generous, sincere, friend, truthful all being positive.

Conclusion

Sentiment analysis allows for the efficient analysis of a large body of text in a highly qualitative manner. There are weaknesses to this approach; for example, the dictionary used to classify the words can affect the results. In addition, sentiment analysis only looks at individual sentences and not larger contextual circumstances such as a paragraph. As such, a sentiment analysis provides descriptive insights and not generalizations.

Visualizing Clustered Data in R

In this post, we will look at how to visualize multivariate clustered data. We will use the “Hitters” dataset from the “ISLR” package. We will use the features of the various baseball players as the dimensions for the clustering. Below is the initial code

library(ISLR);library(cluster)
data("Hitters")
str(Hitters)
## 'data.frame':    322 obs. of  20 variables:
##  $ AtBat    : int  293 315 479 496 321 594 185 298 323 401 ...
##  $ Hits     : int  66 81 130 141 87 169 37 73 81 92 ...
##  $ HmRun    : int  1 7 18 20 10 4 1 0 6 17 ...
##  $ Runs     : int  30 24 66 65 39 74 23 24 26 49 ...
##  $ RBI      : int  29 38 72 78 42 51 8 24 32 66 ...
##  $ Walks    : int  14 39 76 37 30 35 21 7 8 65 ...
##  $ Years    : int  1 14 3 11 2 11 2 3 2 13 ...
##  $ CAtBat   : int  293 3449 1624 5628 396 4408 214 509 341 5206 ...
##  $ CHits    : int  66 835 457 1575 101 1133 42 108 86 1332 ...
##  $ CHmRun   : int  1 69 63 225 12 19 1 0 6 253 ...
##  $ CRuns    : int  30 321 224 828 48 501 30 41 32 784 ...
##  $ CRBI     : int  29 414 266 838 46 336 9 37 34 890 ...
##  $ CWalks   : int  14 375 263 354 33 194 24 12 8 866 ...
##  $ League   : Factor w/ 2 levels "A","N": 1 2 1 2 2 1 2 1 2 1 ...
##  $ Division : Factor w/ 2 levels "E","W": 1 2 2 1 1 2 1 2 2 1 ...
##  $ PutOuts  : int  446 632 880 200 805 282 76 121 143 0 ...
##  $ Assists  : int  33 43 82 11 40 421 127 283 290 0 ...
##  $ Errors   : int  20 10 14 3 4 25 7 9 19 0 ...
##  $ Salary   : num  NA 475 480 500 91.5 750 70 100 75 1100 ...
##  $ NewLeague: Factor w/ 2 levels "A","N": 1 2 1 2 2 1 1 1 2 1 ...

Data Preparation

We need to remove all of the factor variables as the kmeans algorithm cannot support factor variables. In addition, we need to remove the “Salary” variable because it is missing data. Lastly, we need to scale the data because the scaling affects the results of the clustering. The code for all of this is below.

hittersScaled<-scale(Hitters[,c(-14,-15,-19,-20)])

Data Analysis

We will set the k for the kmeans to 3. This can be set to any number and it often requires domain knowledge to determine what is most appropriate. Below is the code

kHitters<-kmeans(hittersScaled,3)
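
If there is no domain knowledge to guide the choice of k, one common heuristic is to compute the total within-cluster sum of squares for several values of k and look for an “elbow” in the curve. Below is a sketch of this check; the range of k and the use of “nstart” are arbitrary choices.

set.seed(123)
wss<-sapply(1:10,function(k) kmeans(hittersScaled,centers=k,nstart=20)$tot.withinss) #within-cluster SS for k = 1 to 10
plot(1:10,wss,type="b",xlab="Number of clusters k",ylab="Total within-cluster sum of squares")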

We now look at some descriptive stats. First, we will see how many examples are in each cluster.

table(kHitters$cluster)
## 
##   1   2   3 
## 116 144  62

The groups are mostly balanced. Next, we will look at the mean of each feature by cluster. This will be done with the “aggregate” function. We will use the original data and make a list by the three clusters.

round(aggregate(Hitters[,c(-14,-15,-19,-20)],FUN=mean,by=list(kHitters$cluster)),1)
##   Group.1 AtBat  Hits HmRun Runs  RBI Walks Years CAtBat  CHits CHmRun
## 1       1 522.4 143.4  15.1 73.8 66.0  51.7   5.7 2179.1  597.2   51.3
## 2       2 256.6  64.5   5.5 30.9 28.6  24.3   5.6 1377.1  355.6   24.7
## 3       3 404.9 106.7  14.8 54.6 59.4  48.1  15.1 6480.7 1783.4  207.5
##   CRuns  CRBI CWalks PutOuts Assists Errors
## 1 299.2 256.1  199.7   380.2   181.8   11.7
## 2 170.1 143.6  122.2   209.0    62.4    5.8
## 3 908.5 901.8  694.0   303.7    70.3    6.4

Now we can see some differences. It seems group 1 is young players (about 5.7 years of experience) who start regularly, based on the number of at-bats they get. Group 2 is young players (about 5.6 years) who may not get to start due to the lower number of at-bats they receive. Group 3 is veteran players (about 15.1 years) who receive significant playing time and have put together impressive career statistics.

Now we will create our visual of the three clusters. For this, we use the “clusplot” function from the “cluster” package.

clusplot(hittersScaled,kHitters$cluster,color = T,shade = T,labels = 4)

1.png

In general, there is little overlap between the clusters. The overlap between groups 1 and 3 may be due to the fact that both contain players who receive substantial playing time.
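
For a numeric check on how well separated the clusters are, we can also compute the average silhouette width using the “cluster” package that is already loaded. Below is a sketch; values closer to 1 indicate better-separated clusters.

sil<-silhouette(kHitters$cluster,dist(hittersScaled)) #silhouette value for each player
summary(sil)$avg.width #average silhouette width across all players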

Conclusion

Visualizing the clusters can help with developing insights into the groups found during the analysis. This post provided one example of this.

Multidimensional Scale in R

In this post, we will explore multidimensional scaling (MDS) in R. The main benefit of MDS is that it allows you to plot multivariate data into two dimensions. This allows you to create visuals of complex models. In addition, the plotting of MDS allows you to see relationships among examples in a dataset based on how far or close they are to each other.

We will use the “College” dataset from the “ISLR” package to create an MDS of the colleges that are in the data set. Below is some initial code.

library(ISLR);library(ggplot2)
data("College")
str(College)
## 'data.frame':    777 obs. of  18 variables:
##  $ Private    : Factor w/ 2 levels "No","Yes": 2 2 2 2 2 2 2 2 2 2 ...
##  $ Apps       : num  1660 2186 1428 417 193 ...
##  $ Accept     : num  1232 1924 1097 349 146 ...
##  $ Enroll     : num  721 512 336 137 55 158 103 489 227 172 ...
##  $ Top10perc  : num  23 16 22 60 16 38 17 37 30 21 ...
##  $ Top25perc  : num  52 29 50 89 44 62 45 68 63 44 ...
##  $ F.Undergrad: num  2885 2683 1036 510 249 ...
##  $ P.Undergrad: num  537 1227 99 63 869 ...
##  $ Outstate   : num  7440 12280 11250 12960 7560 ...
##  $ Room.Board : num  3300 6450 3750 5450 4120 ...
##  $ Books      : num  450 750 400 450 800 500 500 450 300 660 ...
##  $ Personal   : num  2200 1500 1165 875 1500 ...
##  $ PhD        : num  70 29 53 92 76 67 90 89 79 40 ...
##  $ Terminal   : num  78 30 66 97 72 73 93 100 84 41 ...
##  $ S.F.Ratio  : num  18.1 12.2 12.9 7.7 11.9 9.4 11.5 13.7 11.3 11.5 ...
##  $ perc.alumni: num  12 16 30 37 2 11 26 37 23 15 ...
##  $ Expend     : num  7041 10527 8735 19016 10922 ...
##  $ Grad.Rate  : num  60 56 54 59 15 55 63 73 80 52 ...

Data Preparation

After using the “str” function, we know that we need to remove the variable “Private” because it is a factor and the type of MDS we are doing can only accommodate numerical variables. After removing this variable, we will then make a matrix using the “as.matrix” function. Once the matrix is ready, we can use the “cmdscale” function to create the actual two-dimensional MDS. Another point to mention is that, for the sake of simplicity, we are only going to use the first ten colleges in the dataset. The reason is that using all 777 would make it hard to understand the plots we will make. Below is the code.

collegedata<-as.matrix(College[,-1])
collegemds<-cmdscale(dist(collegedata[1:10,]))
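
As a side note, “cmdscale” can also return eigenvalues and a goodness-of-fit measure so you can check how much of the distance structure the two dimensions preserve. Below is a sketch using the same ten colleges.

collegemds.full<-cmdscale(dist(collegedata[1:10,]),k=2,eig=TRUE)
collegemds.full$GOF #values closer to 1 mean the two dimensions preserve the distances well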

Data Analysis

We can now make our initial plot. The xlim and ylim arguments had to be played with a little for the plot to display properly. In addition, the “text” function was used to provide additional information such as the names of the colleges.

plot(collegemds,xlim=c(-15000,10000),ylim=c(-15000,10000))
text(collegemds[,1],collegemds[,2],rownames(collegemds))

1.png

From the plot, you can see that even with only ten names it is messy. The colleges are mostly clumped together, which makes the plot difficult to interpret. We can plot this with a four-quadrant graph using “ggplot2”. First, we need to convert the matrix that we created to a dataframe.

collegemdsdf<-as.data.frame(collegemds)

We are now ready to use “ggplot” to create the four quadrant plot.

p<-ggplot(collegemdsdf, aes(x=V1, y=V2)) +
        geom_point() +
        lims(x=c(-10000,8000),y=c(-4000,5000)) +
        theme_minimal() +
        coord_fixed() +  
        geom_vline(xintercept = 5) + geom_hline(yintercept = 5)+geom_text(aes(label=rownames(collegemdsdf)))
p+theme(panel.grid.major = element_blank(), panel.grid.minor = element_blank())

1

We added horizontal and vertical reference lines near the origin. By doing this, it is much easier to understand and interpret the graph. Agnes Scott College is way off to the left, while Alaska Pacific University, Abilene Christian College, and even Alderson-Broaddus College are clumped together. The rest of the colleges are straddling below the x-axis.

Conclusion

In this example, we took several variables and condensed them to two dimensions. This is the primary benefit of MDS. It allows you to visualize what cannot be visualized normally. The visualization allows you to see the structure of the data, from which you can draw inferences.

Topic Models in R

Topic modeling is a tool that can group texts by their main themes. It involves the use of probabilities based on word frequencies. The algorithm we will use for this is called Latent Dirichlet Allocation (LDA).

In this post, we will use some text mining tools to analyze religious/philosophical texts. The five texts we will look at are

  • The King James Bible
  • The Quran
  • The Book Of Mormon
  • The Gospel of Buddha
  • Meditations, by Marcus Aurelius

The link for access to these five text files is as follows https://www.kaggle.com/tentotheminus9/religious-and-philosophical-texts/downloads/religious-and-philosophical-texts.zip

Once you unzip it you will need to rename each file appropriately.

The next few paragraphs are almost verbatim from the post text mining in R. This is because the data preparation is essentially the same. Small changes were made but original material is found in the analysis section of this post.

We will now begin the actual analysis. The packages we need are “tm” and “topicmodels”. Below is some initial code.

library(tm);library(topicmodels)

Data Preparation

We need to do three things for each text file

  1. Paste it
  2. convert it
  3. write a table

Below is the code for pasting the text into R. Keep in mind that your code will be slightly different, as the location of the files on your computer will be different. The “what” argument tells R what to take from the file, and the “collapse” argument deals with whitespace.

bible<-paste(scan(file ="/home/darrin/Desktop/speech/bible.txt",what='character'),collapse=" ")
buddha<-paste(scan(file ="/home/darrin/Desktop/speech/buddha.txt",what='character'),collapse=" ")
meditations<-paste(scan(file ="/home/darrin/Desktop/speech/meditations.txt",what='character'),collapse=" ")
mormon<-paste(scan(file ="/home/darrin/Desktop/speech/mormon.txt",what='character'),collapse=" ")
quran<-paste(scan(file ="/home/darrin/Desktop/speech/quran.txt",what='character'),collapse=" ")

Now we need to convert the new objects we created to ASCII text. This removes a lot of “funny” characters from the objects. For this, we use the “iconv” function. Below is the code.

bible<-iconv(bible,"latin1","ASCII","")
meditations<-iconv(meditations,"latin1","ASCII","")
buddha<-iconv(buddha,"latin1","ASCII","")
mormon<-iconv(mormon,"latin1","ASCII","")
quran<-iconv(quran,"latin1","ASCII","")

The last step of the preparation is the creation of tables. What you are doing is you are taking the objects you have already created and are moving them to their own folder. The text files need to be alone in order to conduct the analysis. Below is the code.

write.table(bible,"/home/darrin/Documents/R working directory/textminingegw/mine/bible.txt")
write.table(meditations,"/home/darrin/Documents/R working directory/textminingegw/mine/meditations.txt")
write.table(buddha,"/home/darrin/Documents/R working directory/textminingegw/mine/buddha.txt")
write.table(mormon,"/home/darrin/Documents/R working directory/textminingegw/mine/mormon.txt")
write.table(quran,"/home/darrin/Documents/R working directory/textminingegw/mine/quran.txt")

Corpus Development

We are now ready to create the corpus. This is the object we use to clean the texts together rather than individually as before. First, we need to make the corpus object; below is the code. Notice how it contains the directory where our tables are.

docs<-Corpus(DirSource("/home/darrin/Documents/R working directory/textminingegw/mine"))

There are many different ways to prepare the corpus. For our example, we will do the following…

  • Lower case all letters-This avoids the same word being counted separately (i.e. sheep and Sheep)
  • Remove numbers
  • Remove punctuation-Simplifies the document
  • Remove whitespace-Simplifies the document
  • Remove stopwords-Words that have a function but not a meaning (ie to, the, this, etc)
  • Remove custom words-Provides additional clarity

Below is the code for this

docs<-tm_map(docs,tolower)
docs<-tm_map(docs,removeNumbers)
docs<-tm_map(docs,removePunctuation)
docs<-tm_map(docs,removeWords,stopwords('english'))
docs<-tm_map(docs,stripWhitespace)
docs<-tm_map(docs,removeWords,c("chapter","also","no","thee","thy","hath","thou","thus","may",
                                "thee","even","yet","every","said","this","can","unto","upon",
                                "cant",'shall',"will","that","weve","dont","wont"))

We now need to create the matrix. The document-term matrix is what R will actually analyze. We will then remove sparse terms. Sparse terms are terms that do not occur in a certain percentage of the documents in the matrix. For our purposes, we will set the maximum sparsity to .60, which means that a term must appear in at least 2 of the 5 books in our analysis. Below is the code. The “dim” function will allow you to see how drastically the number of terms is reduced. This is done without losing a great deal of data while speeding up computational time.

dtm<-DocumentTermMatrix(docs)
dim(dtm)
## [1]     5 24368
dtm<-removeSparseTerms(dtm,0.6)
dim(dtm)
## [1]    5 5265

Analysis

We will now create our topics or themes. If there is no a priori information on how many topics to make, it is up to you to decide how many. We will create three topics. The “LDA” function is used, and the argument “k” is set to three, indicating we want three topics. Below is the code.

set.seed(123)
lda3<-LDA(dtm,k=3)

We can see which topic each book was assigned to using the “topics” function. Below is the code.

topics(lda3)
##       bible.txt      buddha.txt meditations.txt      mormon.txt 
##               2               3               3               1 
##       quran.txt 
##               3

According to the results, the Book of Mormon and the Bible were so unique that they each had their own topic (1 and 2, respectively). The other three texts (The Gospel of Buddha, Meditations, and the Quran) were all placed in topic 3. It’s surprising that the Bible and the Book of Mormon were in separate topics since they are both Christian texts. It is also surprising that The Gospel of Buddha, Meditations, and the Quran are all under the same topic, as it seems these texts have little in common.
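
The “topics” function only reports the single most likely topic per book. If you want to see how confident the model is, the “posterior” function from “topicmodels” returns the per-document topic probabilities. Below is a sketch using the model above.

round(posterior(lda3)$topics,3) #rows are books, columns are topic probabilities that sum to 1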

We can also use the “terms” function to see what the most common words are for each topic. The first argument in the function is the model name followed by the number of words you want to see. We will look at 10 words per topic.

terms(lda3, 10)
##       Topic 1  Topic 2  Topic 3 
##  [1,] "people" "lord"   "god"   
##  [2,] "came"   "god"    "one"   
##  [3,] "god"    "israel" "things"
##  [4,] "behold" "man"    "say"   
##  [5,] "pass"   "son"    "truth" 
##  [6,] "lord"   "king"   "man"   
##  [7,] "yea"    "house"  "lord"  
##  [8,] "land"   "one"    "life"  
##  [9,] "now"    "come"   "see"   
## [10,] "things" "people" "good"

Interpreting these results takes qualitative skill and is subjective. They all seem to be talking about similar things. Topic 2 (Bible) seems to focus on Israel and the Lord, while topic 1 (Book of Mormon) is about God and people. Topic 3 (Buddha, Meditations, and Quran) speaks of God as well, but the emphasis has moved to truth and the word one.

Conclusion

This post provided insight into developing topic models using R. The results of a topic model analysis are highly subjective and will often require strong domain knowledge. Furthermore, the number of topics is flexible, and in the example in this post we could have tried different numbers of topics for comparative purposes.

Text Mining in R

Text mining is a descriptive analysis tool that is applied to unstructured textual data. By unstructured, we mean data that is not stored in relational databases. The majority of data on the Internet, and in the business world in general, is of an unstructured nature. As such, the use of text mining tools has grown in importance over the past two decades.

In this post, we will use some text mining tools to analyze religious/philosophical texts. The five texts we will look at are

  • The King James Bible
  • The Quran
  • The Book Of Mormon
  • The Gospel of Buddha
  • Meditations, by Marcus Aurelius

The link for access to these five text files is as follows

https://www.kaggle.com/tentotheminus9/religious-and-philosophical-texts/downloads/religious-and-philosophical-texts.zip

Once you unzip it you will need to rename each file appropriately.

The actual process of text mining is rather simple and does not involve a great deal of complex coding compared to other machine learning applications. Primarily you need to do the following

  • Prep the data by first scanning it into R, converting it to ASCII format, and writing a table for each text
  • Create a corpus that is then cleaned of unnecessary characters
  • Conduct the actual descriptive analysis

We will now begin the actual analysis. The packages we need are “tm” for text mining, and “wordcloud” and “RColorBrewer” for visuals. Below is some initial code.

library(tm);library(wordcloud);library(RColorBrewer)

Data Preparation

We need to do three things for each text file

  • Paste the text into R
  • Convert it to ASCII
  • Write a table

Below is the code for pasting the text into R. Keep in mind that your code will be slightly different, as the location of the files on your computer will be different. The “what” argument tells R what to take from the file and the “collapse” argument deals with whitespace.

bible<-paste(scan(file ="/home/darrin/Desktop/speech/bible.txt",what='character'),collapse=" ")
buddha<-paste(scan(file ="/home/darrin/Desktop/speech/buddha.txt",what='character'),collapse=" ")
meditations<-paste(scan(file ="/home/darrin/Desktop/speech/meditations.txt",what='character'),collapse=" ")
mormon<-paste(scan(file ="/home/darrin/Desktop/speech/mormon.txt",what='character'),collapse=" ")
quran<-paste(scan(file ="/home/darrin/Desktop/speech/quran.txt",what='character'),collapse=" ")

Now we need to convert the new objects we created to ASCII text. This removes a lot of “funny” characters from the objects. For this, we use the “iconv” function. Below is the code.

bible<-iconv(bible,"latin1","ASCII","")
meditations<-iconv(meditations,"latin1","ASCII","")
buddha<-iconv(buddha,"latin1","ASCII","")
mormon<-iconv(mormon,"latin1","ASCII","")
quran<-iconv(quran,"latin1","ASCII","")

The last step of the preparation is the creation of tables. Primarily, you are taking the objects you have already created and moving them to their own folder. The text files need to be alone in that folder in order to conduct the analysis. Below is the code.

write.table(bible,"/home/darrin/Documents/R working directory/textminingegw/mine/bible.txt")
write.table(meditations,"/home/darrin/Documents/R working directory/textminingegw/mine/meditations.txt")
write.table(buddha,"/home/darrin/Documents/R working directory/textminingegw/mine/buddha.txt")
write.table(mormon,"/home/darrin/Documents/R working directory/textminingegw/mine/mormon.txt")
write.table(quran,"/home/darrin/Documents/R working directory/textminingegw/mine/quran.txt")

For fun, you can see a snippet of each object by simply typing its name into R as shown below.

bible
##[1] "x 1 The Project Gutenberg EBook of The King James Bible This eBook is for the use of anyone anywhere at no cost and with almost no restrictions whatsoever. You may copy it, give it away or re-use it under the terms of the Project Gutenberg License included with this eBook or online at www.gutenberg.org Title: The King James Bible Release Date: March 2, 2011 [EBook #10] [This King James Bible was orginally posted by Project Gutenberg in late 1989] Language: English *** START OF THIS PROJECT

Corpus Creation

We are now ready to create the corpus. This is the object we use to clean the text together rather than individually as before. First, we need to make the corpus object. Below is the code. Notice how it points to the directory where our tables are.

docs<-Corpus(DirSource("/home/darrin/Documents/R working directory/textminingegw/mine"))

There are many different ways to prepare the corpus. For our example, we will do the following…

  • Lower case all letters-This avoids the same word being counted separately (i.e. sheep and Sheep)
  • Remove numbers
  • Remove punctuation-Simplifies the document
  • Remove whitespace-Simplifies the document
  • Remove stopwords-Words that have a function but not a meaning (i.e. to, the, this, etc.)
  • Remove custom words-Provides additional clarity

Below is the code for this

docs<-tm_map(docs,tolower)
docs<-tm_map(docs,removeNumbers)
docs<-tm_map(docs,removePunctuation)
docs<-tm_map(docs,removeWords,stopwords('english'))
docs<-tm_map(docs,stripWhitespace)
#docs<-tm_map(docs,stemDocument)
docs<-tm_map(docs,removeWords,c("chapter","also","no","thee","thy","hath","thou","thus","may",
                                "thee","even","yet","every","said","this","can","unto","upon",
                                "cant",'shall',"will","that","weve","dont","wont"))

We now need to create the document-term matrix. This matrix is what R will actually analyze. We will then remove sparse terms. Sparse terms are terms that do not occur in a certain percentage of the documents in the matrix. For our purposes, we will set the sparsity to .60. This means that a word must appear in 3 of the 5 books of our analysis. Below is the code. The ‘dim’ function will allow you to see how the number of terms is reduced drastically. This is done without losing a great deal of data while speeding up computational time.

dtm<-DocumentTermMatrix(docs)
dim(dtm)
## [1]     5 24368
dtm<-removeSparseTerms(dtm,0.6)
dim(dtm)
## [1]    5 5265

Analysis

We can now explore the text. First, we need to make a vector that holds the column sums of the document-term matrix. Then we need to change the order so that the most frequent terms come first. Below is the code for this.

freq<-colSums(as.matrix(dtm))
ord<-order(-freq)#changes the order to descending

We can now make a simple bar plot to see what the most common words are. Below is the code

barplot(freq[head(ord)])

1

As expected with religious texts, the most common terms are religious terms. You can also determine which words appeared least often with the code below.

freq[tail(ord)]
##   posting   secured    smiled      sway swiftness worthless 
##         3         3         3         3         3         3

Notice how each of these words appeared 3 times. This likely means that each term appears once in three of the five books. Remember, we set the sparsity to .60, or 3/5.
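
One way to check this, assuming the “dtm” object created above, is to look at the counts of a few of these rare terms in each of the five books.

as.matrix(dtm)[,c("posting","smiled","sway")] #counts of three of the rare terms in each book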

Another analysis is to determine how many words appear a certain number of times. For example, how many words appear 200 or 300 times. Below is the code.

head(table(freq))
## freq
##   3   4   5   6   7   8 
## 117 230 172 192 191 187

Using the “head” function with the “table” function gives us the six most common word-frequency values. 117 words appear three times each, 230 words appear four times each, etc. Remember that “head” gives the first few values regardless of their amount.
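
As a quick cross-check of this reading, you can count the words with a frequency of exactly three directly from the “freq” vector created earlier; the result should match the 117 shown above.

sum(freq==3) #number of words that occur exactly three times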

The “findFreqTerms” function allows you to set a cutoff point for how frequent a word needs to be. For example, if we want to know which words appeared at least 3000 times we would use the following code.

findFreqTerms(dtm,3000)
##  [1] "behold" "came"   "come"   "god"    "land"   "lord"   "man"   
##  [8] "now"    "one"    "people"

The “findAssocs” function finds the correlation between two words in the text. This provides insight into how frequently these words appear together. For our example, we will see which words are associated with war, which is a common subject in many religious texts. We will set the correlation high to keep the list short for the blog post. Below is the code

findAssocs(dtm,"war",corlimit =.998) 
## $war
##     arrows      bands   buildeth    captive      cords     making 
##          1          1          1          1          1          1 
##  perisheth prosperity      tower      wages      yield 
##          1          1          1          1          1

The interpretation of the results can take many forms. It makes sense for ‘arrows’ and ‘captive’ to be associated with ‘war’, but ‘yield’ seems confusing. We also do not know the sample size behind the associations.

Our last technique is the development of a word cloud. This allows you to see word frequency based on where the word is located in the cloud as well as its size. For our example, we will set it so that a word must appear at least 1000 times in the corpus with more common words in the middle. Below is the code.

wordcloud(names(freq),freq,min.freq=1000,scale=c(3,.5),colors=brewer.pal(6,"Dark2"),random.color = F,random.order = F)

1.png

Conclusion

This post provided an introduction to text mining in R. There are many more complex features available for the more serious user of R than what is described here.

Binary Recommendation Engines in R

In this post, we will look at recommendation engines using binary information. A binary recommendation engine requires that the data rate the product as good/bad or some other system in which only two responses are possible. The “recommenderlab” package is needed for this analysis, and we will use movie ratings from grouplens.org.

http://grouplens.org/datasets/movielens/latest/

If you follow along, you want to download the “small dataset” and use “ratings.csv” and “movies.csv”. We will then merge these two datasets based on the variable “movieId”. Below is the initial code.

library(recommenderlab)
ratings <- read.csv("~/Downloads/ml-latest-small/ratings.csv")#load ratings data
movies <- read.csv("~/Downloads/ml-latest-small/movies.csv")#load movies data
movieRatings<-merge(ratings, movies, by='movieId')#merge movies and ratings data

We now need to convert our “movieRatings” data frame to a matrix that “recommenderlab” can use. After doing this, we need to indicate that we are building a binary engine by setting the minimum rating to 2.5. What this means is that anything above 2.5 is in one category and anything below 2.5 is in a different category. We use the “binarize” function to do this. Below is the code.

movieRatings<-as(movieRatings,"realRatingMatrix")
movie.bin<-binarize(movieRatings,minRating=2.5)

We need to use a subset of our data. We need each row to have a certain minimum number of ratings. For this analysis, we keep only rows with more than ten ratings. Below is the code for this.

movie.bin<-movie.bin[rowCounts(movie.bin)>10]
movie.bin
## 1817 x 671 rating matrix of class 'binaryRatingMatrix' with 68643 ratings.

Next, we need to set up the evaluation scheme. We use the “evaluationScheme” function and plug in the data, the method of evaluation, the number of folds, and the given number of ratings. The code is as follows.

set.seed(456)
e.bin<-evaluationScheme(movie.bin,method='cross-validation',k=5,given=10)

We now make a list that holds all the models we want to run. We will run four models: “POPULAR”, “RANDOM”, “UBCF”, and “IBCF”. We will then use the “evaluate” function to see how accurate our models are for 5, 10, 15, and 20 items.

algorithms.bin<-list(POPULAR=list(name="POPULAR",param=NULL),
                     RAND=list(name="RANDOM"),UBCF=list(name="UBCF"),IBCF=list(name="IBCF")) 
results.bin<-evaluate(e.bin,algorithms.bin,n=c(5,10,15,20))

The “avg” function will help us to see how our models did. Below are the results.

avg(results.bin)
## $POPULAR
##          TP        FP       FN       TN precision     recall        TPR
## 5  1.518356  3.481644 26.16877 629.8312 0.3036712 0.09293487 0.09293487
## 10 2.792329  7.207671 24.89479 626.1052 0.2792329 0.15074799 0.15074799
## 15 3.916164 11.083836 23.77096 622.2290 0.2610776 0.20512093 0.20512093
## 20 4.861370 15.138630 22.82575 618.1742 0.2430685 0.24831787 0.24831787
##            FPR
## 5  0.005426716
## 10 0.011221837
## 15 0.017266489
## 20 0.023608749
## 
## $RAND
##           TP        FP       FN       TN  precision      recall
## 5  0.2120548  4.787945 27.47507 628.5249 0.04241096 0.007530989
## 10 0.4104110  9.589589 27.27671 623.7233 0.04104110 0.015611349
## 15 0.6241096 14.375890 27.06301 618.9370 0.04160731 0.023631305
## 20 0.8460274 19.153973 26.84110 614.1589 0.04230137 0.033130430
##            TPR         FPR
## 5  0.007530989 0.007559594
## 10 0.015611349 0.015146399
## 15 0.023631305 0.022702057
## 20 0.033130430 0.030246522
## 
## $UBCF
##          TP        FP       FN       TN precision    recall       TPR
## 5  2.175890  2.824110 25.51123 630.4888 0.4351781 0.1582319 0.1582319
## 10 3.740274  6.259726 23.94685 627.0532 0.3740274 0.2504990 0.2504990
## 15 5.054795  9.945205 22.63233 623.3677 0.3369863 0.3182356 0.3182356
## 20 6.172603 13.827397 21.51452 619.4855 0.3086301 0.3748969 0.3748969
##            FPR
## 5  0.004387006
## 10 0.009740306
## 15 0.015492088
## 20 0.021557381
## 
## $IBCF
##          TP        FP       FN       TN precision     recall        TPR
## 5  1.330411  3.669589 26.35671 629.6433 0.2660822 0.08190126 0.08190126
## 10 2.442192  7.557808 25.24493 625.7551 0.2442192 0.13786523 0.13786523
## 15 3.532603 11.467397 24.15452 621.8455 0.2355068 0.19010813 0.19010813
## 20 4.546301 15.453699 23.14082 617.8592 0.2273151 0.23494969 0.23494969
##            FPR
## 5  0.005727386
## 10 0.011801682
## 15 0.017900255
## 20 0.024124329

The results are pretty bad for all models. The TPR (true positive rate) is always below .4. We can make a visual of the results by creating ROC curves using the TPR/FPR as well as precision/recall.

plot(results.bin,legend="topleft",annotate=T)

1.png

plot(results.bin,"prec",legend="topleft",annotate=T)

1.png

The visual makes it clear that the UBCF model is the best.

Conclusion

This post provided an example of the development of an algorithm for binary recommendations.

Recommendation Engines in R

In this post, we will look at how to make a recommendation engine. We will use data containing movie ratings. We will use the “recommenderlab” package to build several different engines. The data comes from

http://grouplens.org/datasets/movielens/latest/

At this link, you need to download “ml-latest.zip”. From there, we will use the “ratings” and “movies” files in this post. Ratings provide the ratings of the movies while movies provide the names of the movies. Before going further, it is important to know that “recommenderlab” has five different techniques for developing recommendation engines (IBCF, UBCF, POPULAR, RANDOM, & SVD). We will use all of them for comparative purposes. Below is the code for getting started.

library(recommenderlab)
ratings <- read.csv("~/Downloads/ml-latest-small/ratings.csv")
movies <- read.csv("~/Downloads/ml-latest-small/movies.csv")

We now need to merge the two datasets so that they become one. This way the titles and ratings are in one place. We will then coerce our “movieRatings” dataframe into a “realRatingMatrix” in order to continue our analysis. Below is the code

movieRatings<-merge(ratings, movies, by='movieId') #merge two files
movieRatings<-as(movieRatings,"realRatingMatrix") #coerce to realRatingMatrix

We will now create two histograms of the ratings. The first uses the raw data and the second uses normalized data. The “getRatings” function is used in combination with the “hist” function to make the histograms. The normalized version additionally uses the “normalize” function. Below is the code.

hist(getRatings(movieRatings),breaks =10)

1.png

hist(getRatings(normalize(movieRatings)),breaks =10)

1.png

We are now ready to create the evaluation scheme for our analysis. In this object we need to set the data name (movieRatings), the method we want to use (cross-validation), the amount of data we want to use for the training set (80%), how many ratings the algorithm is given during the test set (1) with the rest being used to compute the error. We also need to tell R what a good rating is (4 or higher) and the number of folds for the cross-validation (10). Below is the code for all of this.

set.seed(123)
eSetup<-evaluationScheme(movieRatings,method='cross-validation',train=.8,given=1,goodRating=4,k=10)

Below is the code for developing our models. To do this we need to use the “Recommender” function and the “getData” function to get the dataset. Remember we are using all five modeling techniques.

ubcf<-Recommender(getData(eSetup,"train"),"UBCF")
ibcf<-Recommender(getData(eSetup,"train"),"IBCF")
svd<-Recommender(getData(eSetup,"train"),"svd")
popular<-Recommender(getData(eSetup,"train"),"POPULAR")
random<-Recommender(getData(eSetup,"train"),"RANDOM")

The models have been created. We can now make our predictions using the “predict” function in addition to the “getData” function. We also need to set the argument “type” to “ratings”. Below is the code.

ubcf_pred<-predict(ubcf,getData(eSetup,"known"),type="ratings")
ibcf_pred<-predict(ibcf,getData(eSetup,"known"),type="ratings")
svd_pred<-predict(svd,getData(eSetup,"known"),type="ratings")
pop_pred<-predict(popular,getData(eSetup,"known"),type="ratings")
rand_pred<-predict(random,getData(eSetup,"known"),type="ratings")

We can now look at the accuracy of the models. We will do this in two steps. First, we will look at the error rates. After completing this, we will do a more detailed analysis of the stronger models. Below is the code for the first step

ubcf_error<-calcPredictionAccuracy(ubcf_pred,getData(eSetup,"unknown")) #calculate error
ibcf_error<-calcPredictionAccuracy(ibcf_pred,getData(eSetup,"unknown"))
svd_error<-calcPredictionAccuracy(svd_pred,getData(eSetup,"unknown"))
pop_error<-calcPredictionAccuracy(pop_pred,getData(eSetup,"unknown"))
rand_error<-calcPredictionAccuracy(rand_pred,getData(eSetup,"unknown"))
error<-rbind(ubcf_error,ibcf_error,svd_error,pop_error,rand_error) #combine objects into one data frame
rownames(error)<-c("UBCF","IBCF","SVD","POP","RAND") #give names to rows
error
##          RMSE      MSE       MAE
## UBCF 1.278074 1.633473 0.9680428
## IBCF 1.484129 2.202640 1.1049733
## SVD  1.277550 1.632135 0.9679505
## POP  1.224838 1.500228 0.9255929
## RAND 1.455207 2.117628 1.1354987

The results indicate that the “RAND” and “IBCF” models are clearly worse than the remaining three. We will now move to the second step and take a closer look at the “UBCF”, “SVD”, and “POP” models. We will do this by making a list and using the “evaluate” function to get other model evaluation metrics. We will make a list called “algorithms” that stores the three strongest models. Then we will make an object called “evlist”. In this object we use the “evaluate” function and pass it the evaluation scheme (“eSetup”), the list (“algorithms”), and the number of movies to assess (5, 10, 15, 20).

algorithms<-list(POPULAR=list(name="POPULAR"),SVD=list(name="SVD"),UBCF=list(name="UBCF"))
evlist<-evaluate(eSetup,algorithms,n=c(5,10,15,20))
avg(evlist)
## $POPULAR
##           TP        FP       FN       TN  precision     recall        TPR
## 5  0.3010965  3.033333 4.917105 661.7485 0.09028443 0.07670381 0.07670381
## 10 0.4539474  6.214912 4.764254 658.5669 0.06806016 0.11289681 0.11289681
## 15 0.5953947  9.407895 4.622807 655.3739 0.05950450 0.14080354 0.14080354
## 20 0.6839912 12.653728 4.534211 652.1281 0.05127635 0.16024740 0.16024740
##            FPR
## 5  0.004566269
## 10 0.009363021
## 15 0.014177091
## 20 0.019075070
## 
## $SVD
##           TP        FP       FN       TN  precision     recall        TPR
## 5  0.1025219  3.231908 5.115680 661.5499 0.03077788 0.00968336 0.00968336
## 10 0.1808114  6.488048 5.037390 658.2938 0.02713505 0.01625454 0.01625454
## 15 0.2619518  9.741338 4.956250 655.0405 0.02620515 0.02716656 0.02716656
## 20 0.3313596 13.006360 4.886842 651.7754 0.02486232 0.03698768 0.03698768
##            FPR
## 5  0.004871678
## 10 0.009782266
## 15 0.014689510
## 20 0.019615377
## 
## $UBCF
##           TP        FP       FN       TN  precision     recall        TPR
## 5  0.1210526  2.968860 5.097149 661.8129 0.03916652 0.01481106 0.01481106
## 10 0.2075658  5.972259 5.010636 658.8095 0.03357173 0.02352752 0.02352752
## 15 0.3028509  8.966886 4.915351 655.8149 0.03266321 0.03720717 0.03720717
## 20 0.3813596 11.978289 4.836842 652.8035 0.03085246 0.04784538 0.04784538
##            FPR
## 5  0.004475151
## 10 0.009004466
## 15 0.013520481
## 20 0.018063361

Well, the numbers indicate that all the models are terrible. All metrics are scored rather poorly. True positives, false positives, false negatives, true negatives, precision, recall, true positive rate, and false positive rate are low for all models. Remember that these values are averages over the cross-validation folds. As such, for the “POPULAR” model, when looking at the top five movies, the average number of true positives was .3.

Even though the numbers are terrible the “POPULAR” model always performed the best. We can even view the ROC curve with the code below

plot(evlist,legend="topleft",annotate=T)

1.png

We can now determine individual recommendations. We first need to build a model using the POPULAR algorithm. Below is the code.

Rec1<-Recommender(movieRatings,method="POPULAR")
Rec1
## Recommender of type 'POPULAR' for 'realRatingMatrix' 
## learned using 9066 users.

We will now pull the top five recommendations for the first five raters and make a list. The numbers are the movie IDs and not the actual titles.

recommend<-predict(Rec1,movieRatings[1:5],n=5)
as(recommend,"list")
## $`1`
## [1] "78"  "95"  "544" "102" "4"  
## 
## $`2`
## [1] "242" "232" "294" "577" "95" 
## 
## $`3`
## [1] "654" "242" "30"  "232" "287"
## 
## $`4`
## [1] "564" "654" "242" "30"  "232"
## 
## $`5`
## [1] "242" "30"  "232" "287" "577"

Below we can see the predicted score for specific movies. The column names for the first three movies were assigned manually in the code; the titles correspond to the first three movies in the “movies” dataset.

rating<-predict(Rec1,movieRatings[1:5],type='ratings')
rating
## 5 x 671 rating matrix of class 'realRatingMatrix' with 2873 ratings.
movieresult<-as(rating,'matrix')[1:5,1:3]
colnames(movieresult)<-c("Toy Story","Jumanji","Grumpier Old Men")
movieresult
##   Toy Story  Jumanji Grumpier Old Men
## 1  2.859941 3.822666         3.724566
## 2  2.389340 3.352066         3.253965
## 3  2.148488 3.111213         3.013113
## 4  1.372087 2.334812         2.236711
## 5  2.255328 3.218054         3.119953

This is what the model thinks each person would rate each movie. The error is calculated from the difference between this predicted value and the actual rating. In addition, the NA entries in this matrix correspond to movies the person has already rated, for which no prediction is returned.
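
To make the error idea concrete, here is a minimal sketch of how an RMSE could be computed by hand from hypothetical predicted and actual ratings. This mirrors the idea behind the “calcPredictionAccuracy” function used earlier but is not its exact implementation.

predicted<-c(2.9,3.8,NA,4.1) #hypothetical predictions, NA where no prediction is returned
actual<-c(3.0,4.0,5.0,3.5) #hypothetical actual ratings
sqrt(mean((predicted-actual)^2,na.rm=TRUE)) #RMSE, ignoring the NA entry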

Conclusion

This was a lot of work. However, with additional work, you can have your own recommendation system based on data that was collected.

Understanding Recommendation Engines

Recommendation engines are used to make predictions about what future users would like based on prior users’ ratings. Whenever you provide numerical feedback on a product or service, this information can be used to provide recommendations in the future.

This post will look at various ways in which recommendation engines derive their conclusions.

Ways of Recommending

There are two common ways to develop a recommendation engine in a machine learning context: collaborative filtering and content-based filtering. Content-based recommendations rely solely on the data provided by the user. A user develops a profile through their activity and the engine recommends products or services based on it. The only problem is that if there is little data on a user, poor recommendations are made.

Collaborative filtering makes crowd-based recommendations. What this means is that the data of many users is used to make recommendations for one user. This bypasses the lack-of-data concern that can arise with content-based recommendations.

There are several common ways to develop collaborative filters, and they are as follows

  • User-based collaborative filtering
  • Item-based collaborative filtering
  • Singular value decomposition and Principal component analysis

User-based Collaborative Filtering (UBCF)

UBCF uses k-nearest neighbors or some similarity measurement such as Pearson correlation to predict the missing rating for a user. Once the number of neighbors is determined, the algorithm averages the neighbors’ ratings to predict the missing information for the user.

The predicted value can be used to determine if a user will like a particular product or service. Items with low predicted values are not recommended, while those with high values may be. A major weakness of UBCF is that calculating the similarities between users requires keeping all the data in memory, which is a computational challenge.
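
To make this concrete, below is a minimal sketch of the user-based idea on a tiny made-up ratings matrix. The matrix, the use of Pearson correlation, and the choice of two neighbors are all illustrative assumptions and not the exact computation a package such as “recommenderlab” performs.

ratings<-matrix(c(5,4,NA,1,
                  4,5,2,1,
                  1,2,5,4,
                  2,1,4,NA),nrow=4,byrow=TRUE,
                dimnames=list(paste0("user",1:4),paste0("item",1:4))) #toy ratings, NA = not rated
sims<-apply(ratings[-1,],1,function(u) cor(ratings[1,],u,use="pairwise.complete.obs")) #similarity of user1 to every other user
neighbors<-names(sort(sims,decreasing=TRUE))[1:2] #the two most similar users
mean(ratings[neighbors,"item3"],na.rm=TRUE) #predicted rating of item3 for user1

In this sketch, the prediction for user1’s missing rating of item3 is simply the average of the two most similar users’ ratings of that item.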

Item-based Collaborative Filtering (IBCF)

IBCF uses the similarity between items to make recommendations. This is calculated with the same measures as before (kNN, Pearson correlation, etc.). After finding the most similar items, the algorithm takes the average of the individual user’s own ratings of those similar items to predict the rating the user would give the unknown item.

In order to assure accuracy, it is necessary to have a huge number of items that can have the similarities calculated. This leads to the same computational problems mentioned earlier.
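
Continuing the toy example from the UBCF sketch above (again purely illustrative rather than the recommenderlab implementation), the same prediction can be approached from the item side.

item_sims<-cor(ratings,use="pairwise.complete.obs") #item-to-item similarity computed on the columns
most_similar<-names(which.max(item_sims["item3",-3])) #the item most similar to item3
ratings["user1",most_similar] #user1's own rating of that item is the basis for the prediction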

Singular Value Decomposition and Principal Component Analysis (SVD, PCA)

When the dataset is too big for the first two options, SVD or PCA could be an appropriate choice. What each of these two methods does, put simply, is reduce the dimensionality of the data by creating latent variables. Doing this reduces the computational effort as well as the noise in the data.

With SVD, we can reduce the data to a handful of factors. These retained factors can be used to approximately reproduce the original values, which can then be used to predict missing values.
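
Below is a minimal sketch of this idea using base R’s “svd” function on a small made-up ratings matrix. Keeping only the first two latent factors yields a low-rank approximation of the original ratings; real engines apply the same idea to much larger matrices with missing entries.

m<-matrix(c(5,4,1,1,
            4,5,2,1,
            1,2,5,4,
            2,1,4,5),nrow=4,byrow=TRUE) #toy complete ratings matrix
s<-svd(m) #decompose into u, d, and v
k<-2 #keep only the first two latent factors
round(s$u[,1:k]%*%diag(s$d[1:k])%*%t(s$v[,1:k]),2) #low-rank reconstruction of the ratings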

For PCA, items are combined into components, and items that load on the same component can be used to make predictions for an unknown data point for a user.

Conclusion

Recommendation engines play a critical part in generating sales for many companies. This post provided an insight into how they are created. Understanding this can allow you to develop recommendation engines based on data.

Clustering Mixed Data in R

One of the major problems with hierarchical and k-means clustering is that they cannot handle nominal data. The reality is that most data is mixed or a combination of both interval/ratio data and nominal/ordinal data.

One of many ways to deal with this problem is by using the Gower coefficient. This coefficient compares cases in the dataset pairwise and calculates a dissimilarity between them. The dissimilarity is a weighted mean of the per-variable differences between the two cases.
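
As a tiny illustration of the idea, assuming the “cluster” package, the “daisy” function can compute the Gower dissimilarity between two made-up cases that mix a numeric and a factor variable.

library(cluster)
toy<-data.frame(income=c(30000,45000),smoker=factor(c("yes","no"))) #two hypothetical cases with mixed variable types
daisy(toy,metric="gower") #Gower dissimilarity between the two cases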

Once the dissimilarity calculations are completed using the Gower coefficient (there are naturally other choices), you can then use regular k-means clustering (there are also other choices) to find the traits of the various clusters. In this post, we will use the “MedExp” dataset from the “Ecdat” package. Our goal will be to cluster the mixed data into four clusters. Below is some initial code.

library(cluster);library(Ecdat);library(compareGroups)
data("MedExp")
str(MedExp)
## 'data.frame':    5574 obs. of  15 variables:
##  $ med     : num  62.1 0 27.8 290.6 0 ...
##  $ lc      : num  0 0 0 0 0 0 0 0 0 0 ...
##  $ idp     : Factor w/ 2 levels "no","yes": 2 2 2 2 2 2 2 2 1 1 ...
##  $ lpi     : num  6.91 6.91 6.91 6.91 6.11 ...
##  $ fmde    : num  0 0 0 0 0 0 0 0 0 0 ...
##  $ physlim : Factor w/ 2 levels "no","yes": 1 1 1 1 1 2 1 1 1 1 ...
##  $ ndisease: num  13.7 13.7 13.7 13.7 13.7 ...
##  $ health  : Factor w/ 4 levels "excellent","good",..: 2 1 1 2 2 2 2 1 2 2 ...
##  $ linc    : num  9.53 9.53 9.53 9.53 8.54 ...
##  $ lfam    : num  1.39 1.39 1.39 1.39 1.1 ...
##  $ educdec : num  12 12 12 12 12 12 12 12 9 9 ...
##  $ age     : num  43.9 17.6 15.5 44.1 14.5 ...
##  $ sex     : Factor w/ 2 levels "male","female": 1 1 2 2 2 2 2 1 2 2 ...
##  $ child   : Factor w/ 2 levels "no","yes": 1 2 2 1 2 2 1 1 2 1 ...
##  $ black   : Factor w/ 2 levels "yes","no": 2 2 2 2 2 2 2 2 2 2 ...

You can clearly see that our data is mixed with both numerical and factor variables. Therefore, the first thing we must do is calculate the Gower coefficient for the dataset. This is done with the “daisy” function from the “cluster” package.

disMat<-daisy(MedExp,metric = "gower")

Now we can use the “kmeans” function to make our clusters. This is possible because the Gower calculation has converted the mixed variables into numeric dissimilarities. We will set the number of clusters to 4. Below is the code.

set.seed(123)
mixedClusters<-kmeans(disMat, centers=4)

We can now look at a table of the clusters

table(mixedClusters$cluster)
## 
##    1    2    3    4 
## 1960 1342 1356  916

The groups seem reasonably balanced. We now need to add the results of the kmeans to the original dataset. Below is the code

MedExp$cluster<-mixedClusters$cluster

We can now build a descriptive table that will give us the proportions of each variable in each cluster. To do this we need to use the “compareGroups” function. We will then take the output of the “compareGroups” function and use it in the “createTable” function to get our actual descriptive stats.

group<-compareGroups(cluster~.,data=MedExp)
clustab<-createTable(group)
clustab
## 
## --------Summary descriptives table by 'cluster'---------
## 
## __________________________________________________________________________ 
##                    1            2            3            4      p.overall 
##                  N=1960       N=1342       N=1356       N=916              
## ¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯ 
## med            211 (1119)   68.2 (333)   269 (820)   83.8 (210)   <0.001   
## lc            4.07 (0.60)  4.05 (0.60)  0.04 (0.39)  0.03 (0.34)   0.000   
## idp:                                                              <0.001   
##     no        1289 (65.8%) 922 (68.7%)  1123 (82.8%) 781 (85.3%)           
##     yes       671 (34.2%)  420 (31.3%)  233 (17.2%)  135 (14.7%)           
## lpi           5.72 (1.94)  5.90 (1.73)  3.27 (2.91)  3.05 (2.96)  <0.001   
## fmde          6.82 (0.99)  6.93 (0.90)  0.00 (0.12)  0.00 (0.00)   0.000   
## physlim:                                                          <0.001   
##     no        1609 (82.1%) 1163 (86.7%) 1096 (80.8%) 789 (86.1%)           
##     yes       351 (17.9%)  179 (13.3%)  260 (19.2%)  127 (13.9%)           
## ndisease      11.5 (8.26)  10.2 (2.97)  12.2 (8.50)  10.6 (3.35)  <0.001   
## health:                                                           <0.001   
##     excellent 910 (46.4%)  880 (65.6%)  615 (45.4%)  612 (66.8%)           
##     good      828 (42.2%)  382 (28.5%)  563 (41.5%)  261 (28.5%)           
##     fair      183 (9.34%)   74 (5.51%)  137 (10.1%)  42 (4.59%)            
##     poor       39 (1.99%)   6 (0.45%)    41 (3.02%)   1 (0.11%)            
## linc          8.68 (1.22)  8.61 (1.37)  8.75 (1.17)  8.78 (1.06)   0.005   
## lfam          1.05 (0.57)  1.49 (0.34)  1.08 (0.58)  1.52 (0.35)  <0.001   
## educdec       12.1 (2.87)  11.8 (2.58)  12.0 (3.08)  11.8 (2.73)   0.005   
## age           36.5 (12.0)  9.26 (5.01)  37.0 (12.5)  9.29 (5.11)   0.000   
## sex:                                                              <0.001   
##     male      893 (45.6%)  686 (51.1%)  623 (45.9%)  482 (52.6%)           
##     female    1067 (54.4%) 656 (48.9%)  733 (54.1%)  434 (47.4%)           
## child:                                                             0.000   
##     no        1960 (100%)   0 (0.00%)   1356 (100%)   0 (0.00%)            
##     yes        0 (0.00%)   1342 (100%)   0 (0.00%)   916 (100%)            
## black:                                                            <0.001   
##     yes       1623 (82.8%) 986 (73.5%)  1148 (84.7%) 730 (79.7%)           
##     no        337 (17.2%)  356 (26.5%)  208 (15.3%)  186 (20.3%)           
## ¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯

The table speaks for itself. Results for factor variables are given as counts with proportions. For example, in cluster 1, 1289 people or 65.8% responded “no”, meaning they do not have an individual deductible plan (idp). Numerical variables show the mean with the standard deviation in parentheses. For example, in cluster 1 the average of the family size variable (lfam) was 1.05 with a standard deviation of 0.57.

Conclusion

Mixed data can be partitioned into clusters with the help of the Gower coefficient or another coefficient. In addition, k-means is not the only way to cluster the data; there are other choices, such as partitioning around medoids. The example provided here simply serves as a basic introduction to this.

Hierarchical Clustering in R

Hierarchical clustering is a form of unsupervised learning. What this means is that the data points lack any form of label and the purpose of the analysis is to generate labels for our data points. In other words, we have no Y values in our data.

Hierarchical clustering is an agglomerative technique. This means that each data point starts as its own individual cluster and clusters are merged over iterations. This is great for small datasets but is difficult to scale. In addition, you need to set the linkage, which determines how observations are grouped into clusters. There are several choices (Ward, complete, single, etc.) and the best choice depends on context, as sketched below.
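
As a small sketch of how the linkage choice matters, the same distance matrix can be clustered with different linkages and the resulting dendrograms compared. The use of the built-in “USArrests” data and only its first 20 rows here is purely for illustration.

d<-dist(scale(USArrests[1:20,]),method="euclidean") #distances for a small built-in dataset
par(mfrow=c(1,3))
plot(hclust(d,method="single"),main="Single linkage")
plot(hclust(d,method="complete"),main="Complete linkage")
plot(hclust(d,method="ward.D2"),main="Ward linkage")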

In this post, we will run a hierarchical clustering analysis of the “MedExp” data from the “Ecdat” package. We are trying to identify distinct subgroups in the sample. The actual hierarchical clustering creates what is called a dendrogram. Below is some initial code.

library(cluster);library(compareGroups);library(NbClust);library(HDclassif);library(sparcl);library(Ecdat)
data("MedExp")
str(MedExp)
## 'data.frame':    5574 obs. of  15 variables:
##  $ med     : num  62.1 0 27.8 290.6 0 ...
##  $ lc      : num  0 0 0 0 0 0 0 0 0 0 ...
##  $ idp     : Factor w/ 2 levels "no","yes": 2 2 2 2 2 2 2 2 1 1 ...
##  $ lpi     : num  6.91 6.91 6.91 6.91 6.11 ...
##  $ fmde    : num  0 0 0 0 0 0 0 0 0 0 ...
##  $ physlim : Factor w/ 2 levels "no","yes": 1 1 1 1 1 2 1 1 1 1 ...
##  $ ndisease: num  13.7 13.7 13.7 13.7 13.7 ...
##  $ health  : Factor w/ 4 levels "excellent","good",..: 2 1 1 2 2 2 2 1 2 2 ...
##  $ linc    : num  9.53 9.53 9.53 9.53 8.54 ...
##  $ lfam    : num  1.39 1.39 1.39 1.39 1.1 ...
##  $ educdec : num  12 12 12 12 12 12 12 12 9 9 ...
##  $ age     : num  43.9 17.6 15.5 44.1 14.5 ...
##  $ sex     : Factor w/ 2 levels "male","female": 1 1 2 2 2 2 2 1 2 2 ...
##  $ child   : Factor w/ 2 levels "no","yes": 1 2 2 1 2 2 1 1 2 1 ...
##  $ black   : Factor w/ 2 levels "yes","no": 2 2 2 2 2 2 2 2 2 2 ...

For the purposes of this post, the dataset is too big. If we try to do the analysis with over 5500 observations it will take a long time. Therefore, we will only use the first 1000 observations. In addition, we need to remove the factor variables, as the distance calculation used here cannot handle them. Below is the code.

MedExp_small<-MedExp[1:1000,]
MedExp_small$sex<-NULL
MedExp_small$idp<-NULL
MedExp_small$child<-NULL
MedExp_small$black<-NULL
MedExp_small$physlim<-NULL
MedExp_small$health<-NULL

We now need to scale our data. This is important because different scales would cause different variables to have more or less influence on the results. Below is the code.

MedExp_small_df<-as.data.frame(scale(MedExp_small))

We now need to determine how many clusters to create. There is no firm rule on this, but we can use statistical analysis to help us. The “NbClust” package will conduct several different analyses to provide a suggested number of clusters to create. You have to set the distance, the min/max number of clusters, the method, and the index. The graphs can be understood by looking for the bend or elbow in them. At that point is the best number of clusters.

numComplete<-NbClust(MedExp_small_df,distance = 'euclidean',min.nc = 2,max.nc = 8,method = 'ward.D2',index = c('all'))

1.png

## *** : The Hubert index is a graphical method of determining the number of clusters.
##                 In the plot of Hubert index, we seek a significant knee that corresponds to a 
##                 significant increase of the value of the measure i.e the significant peak in Hubert
##                 index second differences plot. 
## 

1.png

## *** : The D index is a graphical method of determining the number of clusters. 
##                 In the plot of D index, we seek a significant knee (the significant peak in Dindex
##                 second differences plot) that corresponds to a significant increase of the value of
##                 the measure. 
##  
## ******************************************************************* 
## * Among all indices:                                                
## * 7 proposed 2 as the best number of clusters 
## * 9 proposed 3 as the best number of clusters 
## * 6 proposed 6 as the best number of clusters 
## * 1 proposed 8 as the best number of clusters 
## 
##                    ***** Conclusion *****                            
##  
## * According to the majority rule, the best number of clusters is  3 
##  
##  
## *******************************************************************
numComplete$Best.nc
##                     KL       CH Hartigan     CCC    Scott      Marriot
## Number_clusters 2.0000   2.0000   6.0000  8.0000    3.000 3.000000e+00
## Value_Index     2.9814 292.0974  56.9262 28.4817 1800.873 4.127267e+24
##                   TrCovW   TraceW Friedman   Rubin Cindex     DB
## Number_clusters      6.0   6.0000   3.0000  6.0000  2.000 3.0000
## Value_Index     166569.3 265.6967   5.3929 -0.0913  0.112 1.0987
##                 Silhouette   Duda PseudoT2  Beale Ratkowsky     Ball
## Number_clusters     2.0000 2.0000   2.0000 2.0000    6.0000    3.000
## Value_Index         0.2809 0.9567  16.1209 0.2712    0.2707 1435.833
##                 PtBiserial Frey McClain   Dunn Hubert SDindex Dindex
## Number_clusters     6.0000    1   3.000 3.0000      0  3.0000      0
## Value_Index         0.4102   NA   0.622 0.1779      0  1.9507      0
##                   SDbw
## Number_clusters 3.0000
## Value_Index     0.5195

Simple majority indicates that three clusters is most appropriate. However, four clusters are probably just as good. Every time you do the analysis you will get slightly different results unless you set the seed.

To make our actual clusters we need to calculate the distances between observations using the “dist” function while also specifying how to calculate them. We will calculate distance using the “euclidean” method. Then we will take the distance information and make the actual clustering using the ‘hclust’ function. Below is the code.

distance<-dist(MedExp_small_df,method = 'euclidean')
hiclust<-hclust(distance,method = 'ward.D2')

We can now plot the results. We will plot “hiclust” and set hang to -1 so that the observations are placed at the bottom of the plot. Next, we use the “cutree” function to identify 4 clusters and store this in the “comp” variable. Lastly, we use the “ColorDendrogram” function to highlight our actual clusters.

plot(hiclust,hang=-1, labels=F)
comp<-cutree(hiclust,4)
ColorDendrogram(hiclust,y=comp,branchlength = 100)

1.jpeg

We can also create some descriptive stats such as the number of observations per cluster.

table(comp)
## comp
##   1   2   3   4 
## 439 203 357   1

We can also make a table that looks at the descriptive stats by cluster by using the “aggregate” function.

aggregate(MedExp_small_df,list(comp),mean)
##   Group.1         med         lc        lpi       fmde     ndisease
## 1       1  0.01355537 -0.7644175  0.2721403 -0.7498859  0.048977122
## 2       2 -0.06470294 -0.5358340 -1.7100649 -0.6703288 -0.105004408
## 3       3 -0.06018129  1.2405612  0.6362697  1.3001820 -0.002099968
## 4       4 28.66860936  1.4732183  0.5252898  1.1117244  0.564626907
##          linc        lfam    educdec         age
## 1  0.12531718 -0.08861109  0.1149516  0.12754008
## 2 -0.44435225  0.22404456 -0.3767211 -0.22681535
## 3  0.09804031 -0.01182114  0.0700381 -0.02765987
## 4  0.18887531 -2.36063161  1.0070155 -0.07200553

Looking at the scaled means, cluster 4 is a single outlying observation with an extremely high medical expense (‘med’). Cluster 2 stands out as having the lowest incentive payment (‘lpi’), income (‘linc’), education (‘educdec’), and age. Cluster 3 has a high coinsurance rate (‘lc’), while cluster 1 has the lowest. You can make boxplots of each of the stats above. Below is just an example of age by cluster.

MedExp_small_df$cluster<-comp
boxplot(age~cluster,MedExp_small_df)

1.jpeg

Conclusion

Hierarchical clustering is one way to provide labels for data that does not have labels. The main challenge is determining how many clusters to create. However, this can be dealt with by using the recommendations that come from various functions in R.

Using H2o Deep Learning in R

Deep learning is a complex machine learning approach in which new features are created from the variables that were inputted. These new features are used for classifying labeled data. This is all done mostly with artificial neural networks that are multiple layers deep and can involve regularization.

If understanding is not important but you are in search of the most accurate classification possible, deep learning is a useful tool. It is nearly impossible to explain to the typical stakeholder and is best for just getting the job done.

One of the most accessible packages for deep learning is the “h2o” package. This package connects R to an H2O cluster (by default, a local instance started on your own machine) that does the heavy computation and returns the results to R. This allows a researcher to do analytics on a much larger scale than base R alone can comfortably handle. In this post, we will use deep learning to predict the gender of the head of household in the “VietNamH” dataset from the “Ecdat” package. Below is some initial code.

Data Preparation

library(h2o);library(Ecdat);library(corrplot)
data("VietNamH")
str(VietNamH)
## 'data.frame':    5999 obs. of  11 variables:
##  $ sex     : Factor w/ 2 levels "male","female": 2 2 1 2 2 2 2 1 1 1 ...
##  $ age     : int  68 57 42 72 73 66 73 46 50 45 ...
##  $ educyr  : num  4 8 14 9 1 13 2 9 12 12 ...
##  $ farm    : Factor w/ 2 levels "yes","no": 2 2 2 2 2 2 2 2 2 2 ...
##  $ urban   : Factor w/ 2 levels "no","yes": 2 2 2 2 2 2 2 2 2 2 ...
##  $ hhsize  : int  6 6 6 6 8 7 9 4 5 4 ...
##  $ lntotal : num  10.1 10.3 10.9 10.3 10.5 ...
##  $ lnmed   : num  11.23 8.51 8.71 9.29 7.56 ...
##  $ lnrlfood: num  8.64 9.35 10.23 9.26 9.59 ...
##  $ lnexp12m: num  11.23 8.51 8.71 9.29 7.56 ...
##  $ commune : Factor w/ 194 levels "1","10","100",..: 1 1 1 1 1 1 1 1 1 1 ...
corrplot(cor(na.omit(VietNamH[,c(-1,-4,-5,-11)])),method = 'number')

1.png

We need to remove the “commune”, “lnexp12m”, and “lntotal” variables. The “commune” variable should be removed because it doesn’t provide much information. The “lntotal” variable should be removed because it is the total expenditure of the family, which is already represented by other variables such as food (“lnrlfood”), with which “lntotal” is highly correlated. The “lnexp12m” variable should be removed because it has a perfect correlation with “lnmed”. Below is the code.

VietNamH$commune<-NULL
VietNamH$lnexp12m<-NULL
VietNamH$lntotal<-NULL

Save as CSV file

We now need to save our modified dataset as a csv file that we can send to h2o. The code is as follows.

write.csv(VietNamH, file="viet.csv",row.names = F)

Connect to H2O

Now we can connect to H2O and start what is called an instance.

localH2O<-h2o.init()
##  Connection successful!
## 
## R is connected to the H2O cluster: 
##     H2O cluster uptime:         50 minutes 18 seconds 
##     H2O cluster version:        3.10.4.6 
##     H2O cluster version age:    27 days  
##     H2O cluster name:           H2O_started_from_R_darrin_hsl318 
##     H2O cluster total nodes:    1 
##     H2O cluster total memory:   3.44 GB 
##     H2O cluster total cores:    4 
##     H2O cluster allowed cores:  2 
##     H2O cluster healthy:        TRUE 
##     H2O Connection ip:          localhost 
##     H2O Connection port:        54321 
##     H2O Connection proxy:       NA 
##     H2O Internal Security:      FALSE 
##     R Version:                  R version 3.4.0 (2017-04-21)

The output indicates that we are connected. The next step is where it really gets complicated. We need to upload our data to H2O as an H2O data frame, which is different from a regular data frame. We also need to indicate the location of the csv file on your computer that needs to be converted. All of this is done in the code below.

viet.hex<-h2o.uploadFile(path="/home/darrin/Documents/R working directory/blog/blog/viet.csv",destination_frame = "viet.hex")

In the code above we create an object called “viet.hex”. This object uses the “h2o.uploadFile” function to send our csv to h2o. We can check if everything worked by using the “class” function and the “str” function on “viet.hex”.

class(viet.hex)
## [1] "H2OFrame"
str(viet.hex)
## Class 'H2OFrame'  
##  - attr(*, "op")= chr "Parse"
##  - attr(*, "id")= chr "viet.hex"
##  - attr(*, "eval")= logi FALSE
##  - attr(*, "nrow")= int 5999
##  - attr(*, "ncol")= int 8
##  - attr(*, "types")=List of 8
##   ..$ : chr "enum"
##   ..$ : chr "int"
##   ..$ : chr "real"
##   ..$ : chr "enum"
##   ..$ : chr "enum"
##   ..$ : chr "int"
##   ..$ : chr "real"
##   ..$ : chr "real"
##  - attr(*, "data")='data.frame': 10 obs. of  8 variables:
##   ..$ sex     : Factor w/ 2 levels "female","male": 1 1 2 1 1 1 1 2 2 2
##   ..$ age     : num  68 57 42 72 73 66 73 46 50 45
##   ..$ educyr  : num  4 8 14 9 1 13 2 9 12 12
##   ..$ farm    : Factor w/ 2 levels "no","yes": 1 1 1 1 1 1 1 1 1 1
##   ..$ urban   : Factor w/ 2 levels "no","yes": 2 2 2 2 2 2 2 2 2 2
##   ..$ hhsize  : num  6 6 6 6 8 7 9 4 5 4
##   ..$ lnmed   : num  11.23 8.51 8.71 9.29 7.56 ...
##   ..$ lnrlfood: num  8.64 9.35 10.23 9.26 9.59 ...

The “summary” function also provides insight into the data.

summary(viet.hex)
##  sex          age             educyr           farm      urban    
##  male  :4375  Min.   :16.00   Min.   : 0.000   yes:3438  no :4269 
##  female:1624  1st Qu.:37.00   1st Qu.: 3.982   no :2561  yes:1730 
##               Median :46.00   Median : 6.996                      
##               Mean   :48.01   Mean   : 7.094                      
##               3rd Qu.:58.00   3rd Qu.: 9.988                      
##               Max.   :95.00   Max.   :22.000                      
##  hhsize           lnmed            lnrlfood        
##  Min.   : 1.000   Min.   : 0.000   Min.   : 6.356  
##  1st Qu.: 4.000   1st Qu.: 4.166   1st Qu.: 8.372  
##  Median : 5.000   Median : 5.959   Median : 8.689  
##  Mean   : 4.752   Mean   : 5.266   Mean   : 8.680  
##  3rd Qu.: 6.000   3rd Qu.: 7.171   3rd Qu.: 9.001  
##  Max.   :19.000   Max.   :12.363   Max.   :11.384

Create Training and Testing Sets

We now need to create our train and test sets. We need to use slightly different syntax to do this with h2o. The code below creates a 70/30 split in the data.

rand<-h2o.runif(viet.hex,seed = 123)
train<-viet.hex[rand<=.7,]
train<-h2o.assign(train, key = "train")
test<-viet.hex[rand>.7,]
test<-h2o.assign(test, key = "test")

Here is what we did

  1. We created an object called “rand” that contains random numbers for our “viet.hex” dataset.
  2. All values less than or equal to .7 were assigned to the “train” variable
  3. The train variable was given the key name “train” in order to use it in the h2o framework
  4. All values greater than .7 were assigned to test and test was given a key name

You can check the proportions of the train and test sets using the “h2o.table” function.

h2o.table(train$sex)
##      sex Count
## 1 female  1146
## 2   male  3058
## 
## [2 rows x 2 columns]
h2o.table(test$sex)
##      sex Count
## 1 female   478
## 2   male  1317
## 
## [2 rows x 2 columns]

Model Development

We can now create our model.

vietdlmodel<-h2o.deeplearning(x=2:8,y=1,training_frame = train,validation_frame = test,seed=123,variable_importances = T)

Here is what the code above means.

  1. We created an object called “vietdlmodel”
  2. We used the “h2o.deeplearning” function.
  3.  x = 2:8 is all the independent variables in the dataframe and y=1 is the first variable “sex”
  4. We set the training and validation frame to “train” and “test” and set the seed.
  5. Finally, we indicated that we want to know the variable importance.

We can check the performance of the model with the code below.

vietdlmodel
## Model Details:
## training
## Confusion Matrix (vertical: actual; across: predicted) for F1-optimal threshold:
##        female male    Error       Rate
## female    435  711 0.620419  =711/1146
## male      162 2896 0.052976  =162/3058
## Totals    597 3607 0.207659  =873/4204

## testing
## Confusion Matrix (vertical: actual; across: predicted) for F1-optimal threshold:
##        female male    Error       Rate
## female    151  327 0.684100   =327/478
## male       60 1257 0.045558   =60/1317
## Totals    211 1584 0.215599  =387/1795
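
If you only want the confusion matrices rather than the full summary printed above, the “h2o.confusionMatrix” helper should pull them directly (assuming the version of h2o used here exposes it in this form).

h2o.confusionMatrix(vietdlmodel) #training confusion matrix
h2o.confusionMatrix(vietdlmodel,valid=TRUE) #validation (testing) confusion matrix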

There is a lot of output here. For simplicity, we will focus on the confusion matrices for the training and testing sets. The error rate for the training set is 20.8% and for the testing set it is 21.6%. Below we can see which variables were most useful.

vietdlmodel@model$variable_importances
## Variable Importances: 
##             variable relative_importance scaled_importance percentage
## 1           urban.no            1.000000          1.000000   0.189129
## 2          urban.yes            0.875128          0.875128   0.165512
## 3            farm.no            0.807208          0.807208   0.152666
## 4           farm.yes            0.719517          0.719517   0.136081
## 5                age            0.451581          0.451581   0.085407
## 6             hhsize            0.410472          0.410472   0.077632
## 7           lnrlfood            0.386189          0.386189   0.073039
## 8             educyr            0.380398          0.380398   0.071944
## 9              lnmed            0.256911          0.256911   0.048589
## 10  farm.missing(NA)            0.000000          0.000000   0.000000
## 11 urban.missing(NA)            0.000000          0.000000   0.000000

The numbers speak for themselves. “Urban” and “farm” are the most important variables for predicting sex. Below is the code for obtaining the predicted results and placing them into a data frame. This is useful if you need to submit final results to a data science competition such as those found at Kaggle.

vietdlPredict<-h2o.predict(vietdlmodel,newdata = test)
vietdlPredict
##   predict     female      male
## 1    male 0.06045560 0.9395444
## 2    male 0.10957121 0.8904288
## 3    male 0.27459108 0.7254089
## 4    male 0.14721353 0.8527865
## 5    male 0.05493486 0.9450651
## 6    male 0.10598351 0.8940165
## 
## [1795 rows x 3 columns]
vietdlPred<-as.data.frame(vietdlPredict)
head(vietdlPred)
##   predict     female      male
## 1    male 0.06045560 0.9395444
## 2    male 0.10957121 0.8904288
## 3    male 0.27459108 0.7254089
## 4    male 0.14721353 0.8527865
## 5    male 0.05493486 0.9450651
## 6    male 0.10598351 0.8940165

Conclusion

This was a complicated experience. However, we learned how to upload data to and download results from H2O.

Gradient Boosting With Random Forest Classification in R

In this blog, we have already discussed what gradient boosting is. However, for a brief recap, gradient boosting improves model performance by first developing an initial model, called the base learner, using whatever algorithm you choose (linear, tree, etc.).

What follows next is that gradient boosting looks at the error in the first model and develops a second model using what is called the loss function. The loss function measures the difference between the current predictions and the desired outcome, whether that is accuracy for classification or error for regression. This process is repeated with the creation of additional models until a certain level of accuracy or reduction in error is attained.
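
To see the mechanics, below is a hand-rolled sketch of the boosting loop for a regression problem with squared-error loss, using small “rpart” stumps as the weak learner. The simulated data, the number of iterations, and the shrinkage value are illustrative assumptions, and this is not how the “gbm” package is implemented internally.

library(rpart) #small regression trees used as the weak learner
set.seed(1)
dat<-data.frame(x=runif(100))
dat$y<-sin(2*pi*dat$x)+rnorm(100,sd=0.2) #simulated regression data
pred<-rep(mean(dat$y),nrow(dat)) #base learner: just the mean of y
shrinkage<-0.1 #learning rate controlling the influence of each new tree
for(i in 1:100){
  dat$resid<-dat$y-pred #residuals are the negative gradient of squared-error loss
  stump<-rpart(resid~x,data=dat,control=rpart.control(maxdepth=1)) #weak learner fit to the residuals
  pred<-pred+shrinkage*predict(stump,newdata=dat) #nudge the ensemble toward the residuals
}
mean((dat$y-pred)^2) #training error after boosting

Each pass fits a small tree to what the current ensemble still gets wrong, which is the essence of the loss-driven updates described above.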

This post will provide an example of the use of gradient boosting for classification with tree-based models. Specifically, we will try to predict a person’s labor participation based on several independent variables.

library(randomForest);library(gbm);library(caret);library(Ecdat)
data("Participation")
str(Participation)
## 'data.frame':    872 obs. of  7 variables:
##  $ lfp    : Factor w/ 2 levels "no","yes": 1 2 1 1 1 2 1 2 1 1 ...
##  $ lnnlinc: num  10.8 10.5 11 11.1 11.1 ...
##  $ age    : num  3 4.5 4.6 3.1 4.4 4.2 5.1 3.2 3.9 4.3 ...
##  $ educ   : num  8 8 9 11 12 12 8 8 12 11 ...
##  $ nyc    : num  1 0 0 2 0 0 0 0 0 0 ...
##  $ noc    : num  1 1 0 0 2 1 0 2 0 2 ...
##  $ foreign: Factor w/ 2 levels "no","yes": 1 1 1 1 1 1 1 1 1 1 ...

Data Preparation

We need to transform the ‘age’ variable by multiplying by ten so that the ages are realistic. In addition, we need to convert “lnnlinc” from the log of salary to regular salary. Below is the code to transform these two variables.

Participation$age<-10*Participation$age #normal age
Participation$lnnlinc<-exp(Participation$lnnlinc) #actual income not log

We can now create our train and test datasets

set.seed(502)
ind=sample(2,nrow(Participation),replace=T,prob=c(.7,.3))
train<-Participation[ind==1,]
test<-Participation[ind==2,]

We now need to create our grid and control. The grid allows us to create several different models with various parameter settings. This is important for determining the most appropriate model, which is always done by comparing them. Since we are using a tree-based method, we need to set the number of trees we desire, the depth of the trees, the shrinkage (which controls the influence of each tree), and the minimum number of observations in a node. The control will allow us to set the cross-validation. Below is the code for the creation of the grid and control.

grid<-expand.grid(.n.trees=seq(200,500,by=200),.interaction.depth=seq(1,3,by=2),.shrinkage=seq(.01,.09,by=.04),
                   .n.minobsinnode=seq(1,5,by=2)) #grid features
control<-trainControl(method="CV",number = 10) #control

Parameter Selection

Now we set our seed and run the gradient boosted model.

set.seed(123)
gbm.lfp.train<-train(lfp~.,data=train,method='gbm',trControl=control,tuneGrid=grid)
gbm.lfp.train
## Stochastic Gradient Boosting 
## 
## 636 samples
##   6 predictors
##   2 classes: 'no', 'yes' 
## 
## No pre-processing
## Resampling: Cross-Validated (10 fold) 
## Summary of sample sizes: 573, 573, 571, 572, 573, 572, ... 
## Resampling results across tuning parameters:
## 
##   shrinkage  interaction.depth  n.minobsinnode  n.trees  Accuracy 
##   0.01       1                  1               200      0.6666026
##   0.01       1                  1               400      0.6823306
##   0.01       1                  3               200      0.6588637
##   0.01       1                  3               400      0.6854804
##   0.01       1                  5               200      0.6792769
##   0.01       1                  5               400      0.6823306
##   0.01       3                  1               200      0.6730044
##   0.01       3                  1               400      0.6572051
##   0.01       3                  3               200      0.6793273
##   0.01       3                  3               400      0.6697787
##   0.01       3                  5               200      0.6682914
##   0.01       3                  5               400      0.6650416
##   0.05       1                  1               200      0.6759558
##   0.05       1                  1               400      0.6508040
##   0.05       1                  3               200      0.6681426
##   0.05       1                  3               400      0.6602286
##   0.05       1                  5               200      0.6680441
##   0.05       1                  5               400      0.6570788
##   0.05       3                  1               200      0.6493662
##   0.05       3                  1               400      0.6603518
##   0.05       3                  3               200      0.6540545
##   0.05       3                  3               400      0.6366911
##   0.05       3                  5               200      0.6712428
##   0.05       3                  5               400      0.6445299
##   0.09       1                  1               200      0.6461405
##   0.09       1                  1               400      0.6634768
##   0.09       1                  3               200      0.6571036
##   0.09       1                  3               400      0.6320765
##   0.09       1                  5               200      0.6554922
##   0.09       1                  5               400      0.6540755
##   0.09       3                  1               200      0.6523920
##   0.09       3                  1               400      0.6430140
##   0.09       3                  3               200      0.6430666
##   0.09       3                  3               400      0.6447749
##   0.09       3                  5               200      0.6540522
##   0.09       3                  5               400      0.6524416
##   Kappa    
##   0.3210036
##   0.3611194
##   0.3032151
##   0.3667274
##   0.3472079
##   0.3603046
##   0.3414686
##   0.3104335
##   0.3542736
##   0.3355582
##   0.3314006
##   0.3258459
##   0.3473532
##   0.2961782
##   0.3310251
##   0.3158762
##   0.3308353
##   0.3080692
##   0.2940587
##   0.3170198
##   0.3044814
##   0.2692627
##   0.3378545
##   0.2844781
##   0.2859754
##   0.3214156
##   0.3079460
##   0.2585840
##   0.3062307
##   0.3044324
##   0.3003943
##   0.2805715
##   0.2827956
##   0.2861825
##   0.3024944
##   0.3002135
## 
## Accuracy was used to select the optimal model using  the largest value.
## The final values used for the model were n.trees = 400,
##  interaction.depth = 1, shrinkage = 0.01 and n.minobsinnode = 3.

The training process reports the accuracy and kappa of each candidate model and recommends the parameters shown above. Before fitting the final model, we also need to recode the dependent variable as 0 and 1, which is what the ‘gbm’ function expects for a Bernoulli outcome.

Model Training

train$lfp=ifelse(train$lfp=="no",0,1)
gbm.lfp<-gbm(lfp~., distribution = 'bernoulli',data=train,n.trees = 400,interaction.depth = 1,shrinkage=.01,n.minobsinnode = 3)

You can see a summary of the most important variables for prediction as well as a plot by using the “summary” function.

summary(gbm.lfp)

1.png

##             var   rel.inf
## lnnlinc lnnlinc 28.680447
## age         age 27.451474
## foreign foreign 23.307932
## nyc         nyc 18.375856
## educ       educ  2.184291
## noc         noc  0.000000

Salary (lnnlinc), age, and foreign status are the most important predictors, followed by the number of young children (nyc) and lastly education. The number of older children (noc) contributes nothing. Before moving on to testing, we can also look at how the strongest predictor relates to the outcome, as sketched below.
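
The gbm package can draw a partial dependence plot showing the marginal effect of a single predictor on the model’s predictions. The call below is an illustrative addition rather than part of the original analysis; it plots the effect of income using the 400 trees fit above.

plot(gbm.lfp,i.var = "lnnlinc",n.trees = 400) #partial dependence of predicted participation on income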

Model Testing

gbm.lfp.test<-predict(gbm.lfp,newdata = test,type = 'response', n.trees = 400)

Our test model returns a set of probabilities. We need to convert these to a simple yes or no, which is done in the code below.

gbm.class<-ifelse(gbm.lfp.test<0.5,'no','yes')

We can now build a confusion table and calculate the accuracy of the model; an alternative using caret’s confusionMatrix function is sketched after the output.

table(gbm.class,test$lfp)
##          
## gbm.class no yes
##       no  91  39
##       yes 39  67
(accuracy<-(91+67)/(91+67+39+39))
## [1] 0.6694915
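
If you would rather not compute the accuracy by hand, the caret package loaded earlier provides the confusionMatrix function, which reports the same table along with accuracy, kappa, sensitivity, and specificity. A possible call is shown below; it simply requires converting the predicted classes to a factor with the same levels as the outcome.

confusionMatrix(factor(gbm.class,levels = levels(test$lfp)),test$lfp) #same table plus accuracy, kappa, and more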

The model is not great. However, you now have an example of how to use gradient boosting to develop a classification model.

Gradient Boosting Of Regression Trees in R

Gradient boosting is a machine learning tool for “boosting” or improving model performance. How this works is that you first develop an initial model called the base learner using whatever algorithm of your choice (linear, tree, etc.).

Gradient boosting then looks at the errors of the base learner and develops a second model guided by what is called the loss function. The loss function measures the gap between the current predictions and the desired outcome, whether that is misclassification for classification or prediction error for regression. Additional models are built, each one focusing on the cases the ensemble currently handles worst, until the desired level of accuracy is reached.

Gradient boosting is also stochastic, meaning it randomly draws a subsample of the data when fitting each tree. This helps to improve accuracy and/or reduce error.

In this post, we will use gradient boosting for regression trees. In particular, we will use the “Sacramento” dataset from the “caret” package. Our goal is to predict a house’s price based on the available variables. Below is some initial code

library(caret);library(gbm);library(corrplot)
data("Sacramento")
str(Sacramento)
## 'data.frame':    932 obs. of  9 variables:
##  $ city     : Factor w/ 37 levels "ANTELOPE","AUBURN",..: 34 34 34 34 34 34 34 34 29 31 ...
##  $ zip      : Factor w/ 68 levels "z95603","z95608",..: 64 52 44 44 53 65 66 49 24 25 ...
##  $ beds     : int  2 3 2 2 2 3 3 3 2 3 ...
##  $ baths    : num  1 1 1 1 1 1 2 1 2 2 ...
##  $ sqft     : int  836 1167 796 852 797 1122 1104 1177 941 1146 ...
##  $ type     : Factor w/ 3 levels "Condo","Multi_Family",..: 3 3 3 3 3 1 3 3 1 3 ...
##  $ price    : int  59222 68212 68880 69307 81900 89921 90895 91002 94905 98937 ...
##  $ latitude : num  38.6 38.5 38.6 38.6 38.5 ...
##  $ longitude: num  -121 -121 -121 -121 -121 ...

Data Preparation

There are already some actions that need to be taken. We need to remove the variables “city” and “zip” because they both have a large number of factor levels. Next, we need to remove “latitude” and “longitude” because these values are hard to interpret in a housing price model. Let’s run the correlations before removing this information.

corrplot(cor(Sacramento[,c(-1,-2,-6)]),method = 'number')

1

There also appears to be a high correlation between “sqft” and both beds and baths. As such, we will remove “sqft” from the model. Below is the code that keeps only the variables remaining for the model.

sacto.clean<-Sacramento
sacto.clean[,c(1,2,5)]<-NULL
sacto.clean[,c(5,6)]<-NULL
str(sacto.clean)
## 'data.frame':    932 obs. of  4 variables:
##  $ beds : int  2 3 2 2 2 3 3 3 2 3 ...
##  $ baths: num  1 1 1 1 1 1 2 1 2 2 ...
##  $ type : Factor w/ 3 levels "Condo","Multi_Family",..: 3 3 3 3 3 1 3 3 1 3 ...
##  $ price: int  59222 68212 68880 69307 81900 89921 90895 91002 94905 98937 ...

We will now develop our training and testing sets

set.seed(502)
ind=sample(2,nrow(sacto.clean),replace=T,prob=c(.7,.3))
train<-sacto.clean[ind==1,]
test<-sacto.clean[ind==2,]

We need to create a grid in order to develop the many different potential models. We have to tune three different parameters for gradient boosting: number of trees, interaction depth, and shrinkage. Number of trees is how many trees gradient boosting will build, interaction depth is the number of splits in each tree, and shrinkage controls the contribution of each tree to the final model. We also have to determine the type of cross-validation using the “trainControl” function. Below is the code for the grid and control.

grid<-expand.grid(.n.trees=seq(100,500,by=200),.interaction.depth=seq(1,4,by=1),.shrinkage=c(.001,.01,.1),
                  .n.minobsinnode=10)
control<-trainControl(method = "CV")

Model Training

We now can train our model

gbm.train<-train(price~.,data=train,method='gbm',trControl=control,tuneGrid=grid)
gbm.train
Stochastic Gradient Boosting 

685 samples
  4 predictors

No pre-processing
Resampling: Cross-Validated (25 fold) 
Summary of sample sizes: 659, 657, 658, 657, 657, 657, ... 
Resampling results across tuning parameters:

  shrinkage  interaction.depth  n.trees  RMSE       Rsquared 
  0.001      1                  100      128372.32  0.4850879
  0.001      1                  300      120272.16  0.4965552
  0.001      1                  500      113986.08  0.5064680
  0.001      2                  100      127197.20  0.5463527
  0.001      2                  300      117228.42  0.5524074
  0.001      2                  500      109634.39  0.5566431
  0.001      3                  100      126633.35  0.5646994
  0.001      3                  300      115873.67  0.5707619
  0.001      3                  500      107850.02  0.5732942
  0.001      4                  100      126361.05  0.5740655
  0.001      4                  300      115269.63  0.5767396
  0.001      4                  500      107109.99  0.5799836
  0.010      1                  100      103554.11  0.5286663
  0.010      1                  300       90114.05  0.5728993
  0.010      1                  500       88327.15  0.5838981
  0.010      2                  100       97876.10  0.5675862
  0.010      2                  300       88260.16  0.5864650
  0.010      2                  500       86773.49  0.6007150
  0.010      3                  100       96138.06  0.5778062
  0.010      3                  300       87213.34  0.5975438
  0.010      3                  500       86309.87  0.6072987
  0.010      4                  100       95260.93  0.5861798
  0.010      4                  300       86962.20  0.6011429
  0.010      4                  500       86380.39  0.6082593
  0.100      1                  100       86808.91  0.6022690
  0.100      1                  300       86081.65  0.6100963
  0.100      1                  500       86197.52  0.6081493
  0.100      2                  100       86810.97  0.6036919
  0.100      2                  300       87251.66  0.6042293
  0.100      2                  500       88396.21  0.5945206
  0.100      3                  100       86649.14  0.6088309
  0.100      3                  300       88565.35  0.5942948
  0.100      3                  500       89971.44  0.5849622
  0.100      4                  100       86922.22  0.6037571
  0.100      4                  300       88629.92  0.5894188
  0.100      4                  500       91008.39  0.5718534

Tuning parameter 'n.minobsinnode' was held constant at a value of 10
RMSE was used to select the optimal model using  the smallest value.
The final values used for the model were n.trees = 300, interaction.depth = 1, shrinkage = 0.1 and n.minobsinnode = 10.

The printout shows the results for each candidate model, and at the bottom are the recommended parameters. We use these values to fit the final model, which we will then evaluate on the test data.

gbm.price<-gbm(price~.,data=train,n.trees = 300,interaction.depth = 1,
              shrinkage = .1,distribution = 'gaussian')
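
The stochastic element of gradient boosting mentioned earlier is controlled in the gbm package by the bag.fraction argument, which by default uses half of the training rows for each tree. As an illustration only (we keep the default in this post), a variant of the model above with a larger subsample might look like this:

gbm.price.bag<-gbm(price~.,data=train,n.trees = 300,interaction.depth = 1,
              shrinkage = .1,distribution = 'gaussian',bag.fraction = .7) #use 70% of the training rows for each tree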

Test Model

Now we turn to the test data. Below we make predictions, calculate the error, and create a plot.

gbm.test<-predict(gbm.price,newdata = test,n.trees = 300)
gbm.resid<-gbm.test-test$price
mean(gbm.resid^2)
## [1] 8721772767
plot(gbm.test,test$price)

1

The raw mean squared error is only meaningful when comparing one model to another; by itself it tells us little. The plot, however, looks reasonable and suggests that our model may be doing fairly well. A quick way to express the error in dollars follows.
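
Taking the square root of the mean squared error gives the typical prediction error in the same units as price. This line is a small addition to the original analysis; based on the value above it works out to roughly $93,000.

sqrt(mean(gbm.resid^2)) #RMSE, in dollars (roughly 93,000 here)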

Random Forest Classification in R

This post will cover the use of random forest for classification. Random forest builds many decision trees and combines them into a single classification or regression model. For regression the predictions of the individual trees are averaged, while for classification each tree casts a vote and the majority class is assigned to the example. The use of an ensemble helps in dealing with the bias-variance tradeoff.

In the example of random forest classification, we will use the “Participation” dataset from the “Ecdat” package. We want to classify people by their labor participation based on the other variables available in the dataset. Below is some initial code

library(randomForest);library(Ecdat)
data("Participation")
str(Participation)
## 'data.frame':    872 obs. of  7 variables:
##  $ lfp    : Factor w/ 2 levels "no","yes": 1 2 1 1 1 2 1 2 1 1 ...
##  $ lnnlinc: num  10.8 10.5 11 11.1 11.1 ...
##  $ age    : num  3 4.5 4.6 3.1 4.4 4.2 5.1 3.2 3.9 4.3 ...
##  $ educ   : num  8 8 9 11 12 12 8 8 12 11 ...
##  $ nyc    : num  1 0 0 2 0 0 0 0 0 0 ...
##  $ noc    : num  1 1 0 0 2 1 0 2 0 2 ...
##  $ foreign: Factor w/ 2 levels "no","yes": 1 1 1 1 1 1 1 1 1 1 ...

For the data preparation, we need to multiply age by ten, as the stored values imply small children. Furthermore, we need to change the “lnnlinc” variable from the log of salary back to the regular salary. After completing these two steps, we split our data into training and testing sets. Below is the code

Participation$age<-10*Participation$age #normal age
Participation$lnnlinc<-exp(Participation$lnnlinc) #actual income not log
#split data
set.seed(502)
ind=sample(2,nrow(Participation),replace=T,prob=c(.7,.3))
train<-Participation[ind==1,]
test<-Participation[ind==2,]

We will now create our classification model using random forest.

set.seed(123)
rf.lfp<-randomForest(lfp~.,data = train)
rf.lfp
## 
## Call:
##  randomForest(formula = lfp ~ ., data = train) 
##                Type of random forest: classification
##                      Number of trees: 500
## No. of variables tried at each split: 2
## 
##         OOB estimate of  error rate: 32.39%
## Confusion matrix:
##      no yes class.error
## no  248  93   0.2727273
## yes 113 182   0.3830508

The output is mostly self-explanatory. It includes the number of trees, the number of variables tried at each split, the out-of-bag error rate, and the confusion matrix. In general, our error rate is poor, and we are having a hard time distinguishing between those who work and those who do not based on the variables in the dataset. However, this is based on having all 500 trees in the analysis. Having this many trees is probably not necessary, but we need to confirm this.
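
As an aside, because the final class for each observation comes from a vote across the trees, the out-of-bag vote proportions are stored in the fitted object and can be inspected directly. The line below is an illustrative addition, not part of the original analysis.

head(rf.lfp$votes) #OOB proportion of trees voting 'no' vs 'yes' for the first few observations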

We can also plot the error by tree using the “plot” function as shown below.

plot(rf.lfp)

1.png

It looks as though the error stops improving well before 500 trees. We can find where the error rate reaches its minimum using the “which.min” function on the “err.rate” component of our model.

which.min(rf.lfp$err.rate[,1])
## [1] 242

The error rate reaches its minimum well before the default 500 trees, so the full forest is not needed. We will now refit the model with a smaller number of trees; 395 is used in the code below.

rf.lfp2<-randomForest(lfp~.,data = train,ntree=395)
rf.lfp2
## 
## Call:
##  randomForest(formula = lfp ~ ., data = train, ntree = 395) 
##                Type of random forest: classification
##                      Number of trees: 395
## No. of variables tried at each split: 2
## 
##         OOB estimate of  error rate: 31.92%
## Confusion matrix:
##      no yes class.error
## no  252  89   0.2609971
## yes 114 181   0.3864407

The results are mostly the same. There is a small decline in error but not much to get excited about. We will now run our model on the test set.

rf.lfptest<-predict(rf.lfp2,newdata=test,type = 'response')
table(rf.lfptest,test$lfp)
##           
## rf.lfptest no yes
##        no  93  48
##        yes 37  58
(93+58)/(93+58+48+37) #calculate accuracy
## [1] 0.6398305

Still disappointing. There is one last chart we should examine: the variable importance plot, which shows which variables are most useful in the prediction process. Below is the code.

varImpPlot(rf.lfp2)

1.png

This plot clearly indicates that salary (“lnnlinc”), age, and education are the strongest features for classifying by labor activity. However, the overall model is probably not useful.
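
If you want the numbers behind the plot rather than the plot itself, the importance function in randomForest returns the mean decrease in Gini impurity for each predictor. This is an illustrative addition; its output is omitted here.

importance(rf.lfp2) #mean decrease in Gini impurity for each predictor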

Conclusion

This post explained and demonstrated how to conduct a random forest analysis. This form of analysis is powerful in dealing with large datasets with nonlinear relationships among the variables.