# Variable Selection in Python

A key concept in machine learning, and data science in general, is variable selection. Sometimes a dataset can have hundreds of variables to include in a model. The benefit of variable selection is that it reduces the amount of useless information, or noise, in the model. Removing noise can improve the learning process and help stabilize the estimates.

In this post, we will look at two common ways to do this: the univariate approach and the greedy approach. The univariate approach selects the variables that are most related to the dependent variable based on a metric. The greedy approach only removes a variable if getting rid of it does not affect the model's performance.

We will now move to our first example, which is the univariate approach, using Python. We will use the VietNamH dataset from the pydataset library. Our goal is to predict how much a family spends on medical expenses. Below is the initial code.

```python
import pandas as pd
import numpy as np
from pydataset import data
from sklearn.linear_model import LinearRegression
from sklearn.feature_selection import SelectPercentile
from sklearn.feature_selection import f_regression

df = data('VietNamH').dropna()
```

Our data is called df. If you use the head function, you will see that we need to convert several variables to dummy variables. Below is the code for doing this.

```python
df.loc[df.sex == 'female', 'sex'] = 0
df.loc[df.sex == 'male', 'sex'] = 1
df.loc[df.farm == 'no', 'farm'] = 0
df.loc[df.farm == 'yes', 'farm'] = 1
df.loc[df.urban == 'no', 'urban'] = 0
df.loc[df.urban == 'yes', 'urban'] = 1
```
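As a compact alternative to the `loc` assignments, pandas' `map` method can recode each yes/no column in one step. A minimal sketch on a toy frame (the column names mirror VietNamH, but the rows here are made up):

```python
import pandas as pd

# Toy frame mimicking the categorical columns in VietNamH (made-up rows)
df = pd.DataFrame({'sex': ['female', 'male', 'male'],
                   'farm': ['no', 'yes', 'no'],
                   'urban': ['yes', 'no', 'yes']})

# map() replaces each category with its numeric code
df['sex'] = df['sex'].map({'female': 0, 'male': 1})
for col in ['farm', 'urban']:
    df[col] = df[col].map({'no': 0, 'yes': 1})

print(df)
```

Either approach produces the same 0/1 coding; `map` just avoids writing two assignments per column.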

We now need to set up our X and y datasets as shown below.

```python
X = df[['age', 'educyr', 'sex', 'hhsize', 'farm', 'urban', 'lnrlfood']]
y = df['lnmed']
```

We are now ready to actually use the univariate approach. This involves two different tools in Python. The SelectPercentile class allows you to include only the variables that meet a certain percentile rank, such as 25%. The f_regression function scores each variable's performance in the context of regression. Below is the code to run the analysis.

```python
selector_f = SelectPercentile(f_regression, percentile=25)
selector_f.fit(X, y)
```

We can now see the results using a for loop. We want the scores from our selector_f object. To do this, we set up a for loop and use the zip function to iterate over the data. The output is placed in the print statement. Below is the code and output for this.

```python
for n, s in zip(X, selector_f.scores_):
    print('F-score: %3.2f\t for feature %s' % (s, n))
```

```
F-score: 62.42   for feature age
F-score: 33.86   for feature educyr
F-score: 3.17    for feature sex
F-score: 106.35  for feature hhsize
F-score: 14.82   for feature farm
F-score: 5.95    for feature urban
F-score: 97.77   for feature lnrlfood
```

You can see the F-score for each of the independent variables. You can decide for yourself which to include.
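If you want the reduced dataset rather than just the scores, SelectPercentile also provides `fit_transform` and `get_support`. A minimal sketch on synthetic data (the feature names are placeholders, not the VietNamH columns):

```python
import numpy as np
from sklearn.feature_selection import SelectPercentile, f_regression

# Synthetic stand-in: 200 rows, 4 features, only the first two drive y
rng = np.random.RandomState(0)
X = rng.normal(size=(200, 4))
y = 3 * X[:, 0] + 2 * X[:, 1] + rng.normal(size=200)

names = ['f1', 'f2', 'f3', 'f4']  # placeholder feature names
selector = SelectPercentile(f_regression, percentile=50)
X_reduced = selector.fit_transform(X, y)

# get_support() is a boolean mask over the original columns
kept = [n for n, keep in zip(names, selector.get_support()) if keep]
print(kept)
print(X_reduced.shape)  # half of the 4 columns survive: (200, 2)
```

This way the percentile cutoff does the selection for you instead of reading the scores by eye.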

## Greedy Approach

The greedy approach only removes variables if they do not impact model performance. We are using the same dataset, so all we have to do is run the code. We need the RFECV class from the feature_selection module. We then create an RFECV instance, setting the estimator, cross-validation, and scoring metric. Finally, we run the analysis and print the results. The code is below with the output.

```python
from sklearn.feature_selection import RFECV

regression = LinearRegression()
select = RFECV(estimator=regression, cv=10, scoring='neg_mean_squared_error')
select.fit(X, y)
print(select.n_features_)
```

```
7
```

The number 7 represents how many independent variables to include in the model. Since we only had 7 variables to begin with, we should include all of them in the model.
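Beyond `n_features_`, a fitted RFECV object also exposes `support_` (a boolean mask of the kept columns) and `ranking_`, which are useful when some variables do get eliminated. A minimal sketch on synthetic data, since the outcome depends on the dataset:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.feature_selection import RFECV

# Synthetic stand-in: 5 features, only the first three influence y
rng = np.random.RandomState(0)
X = rng.normal(size=(200, 5))
y = X[:, 0] + 2 * X[:, 1] - X[:, 2] + 0.1 * rng.normal(size=200)

select = RFECV(estimator=LinearRegression(), cv=10,
               scoring='neg_mean_squared_error')
select.fit(X, y)

print(select.n_features_)  # how many columns RFECV decided to keep
print(select.support_)     # boolean mask over the columns
print(select.ranking_)     # 1 = kept; larger numbers = dropped earlier
```

The mask from `support_` can be used to subset the original columns, just as `get_support` was used in the univariate approach.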

## Conclusion

With the help of the univariate and greedy approaches, it is possible to deal with a large number of variables efficiently when developing models. The examples here involved only a handful of variables. However, bear in mind that the approaches mentioned here are highly scalable and useful.

# Classification of Variables

In addition to the types of variables, there are also several ways to classify variables. Two ways to classify variables are experimental and mathematical.

Experimental classification groups variables by the function they serve in the experiment. In experimental research, we have independent and dependent variables. Independent variables are variables that are controlled by the researcher and are believed to have an effect on the dependent variable. Dependent variables are affected by the independent variables.

For example, let’s say we want to see how sleep affects GPA. We would manipulate the amount of sleep a person gets, which is the independent variable, and observe how their GPA changes, as GPA is the dependent variable influenced by sleep.

The second type of classification is mathematical. A continuous variable can assume an infinite number of values. Examples would be weight or height.

A discrete variable consists of a finite number of values. Examples include gender and the number of computers. You can’t be half a gender; you are either a man or a woman.

What type of variable to use depends again on the research questions of the study.