Monthly Archives: August 2015

Switch Function in R

The ‘switch’ function in R is useful when you have more than two choices based on a condition. If you recall, ‘if’ and ‘ifelse’ are useful when there are two choices for a condition. Using if/else statements for more than two choices is possible but somewhat confusing. The ‘switch’ function is available for simplicity.

One problem with the ‘switch’ function is that you cannot use vectors as the input to the function. This means that you have to calculate the value for each data point yourself. There is a way around this, but that is the topic of a future post. For now, we will look at how to use the ‘switch’ function.
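To illustrate this limitation (with made-up values), passing a vector of choices to ‘switch’ produces an error rather than one result per element:

```r
# switch() expects a single value; a vector of choices raises an error
result <- tryCatch(
  switch(c("Home", "Away"), Home = 1.5, Away = 1.3, International = 2),
  error = function(e) conditionMessage(e)
)
print(result)  # an error message saying the expression must be length one
```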

Switch Function

The last time we discussed R programming, we had set up the ‘CashDonate’ function to calculate how much James would give in dollars per point and how much the owner would match James based on whether it was a home or away game. Below is the code.

CashDonate <- function(points, HomeGame=TRUE){
 JamesDonate<- points * ifelse(points > 100, 30, 40)
 totalDonation<- JamesDonate * ifelse(HomeGame, 1.5, 1.3)
 round(totalDonation)
}

Now the team will have several international games outside the country. For these games, the owner will double whatever James donates. We now have three choices in our code.

  • Home game: multiply by 1.5
  • Away game: multiply by 1.3
  • International game: multiply by 2

It is possible to do this with nested ‘if/else’ statements, but it is complicated to code, especially for beginners. Instead, we will use the ‘switch’ function, which allows R to switch between multiple options within the argument. Below is the new code.

CashDonate <- function(points, GameType){
 JamesDonate<- points * ifelse(points > 100, 30, 40)
 OwnerRate<- switch(GameType, Home = 1.5, Away = 1.3, International = 2)
 totalDonation<-JamesDonate * OwnerRate
 round(totalDonation)
}

Here is what the code does

  1. The function ‘CashDonate’ has the arguments ‘points’ and ‘GameType’
  2. ‘JamesDonate’ is ‘points’ multiplied by 30 if more than 100 points are scored, otherwise by 40
  3. ‘OwnerRate’ is new, as it uses the ‘switch’ function. If ‘GameType’ is ‘Home’ the amount from ‘JamesDonate’ is multiplied by 1.5, ‘Away’ is multiplied by 1.3, and ‘International’ is multiplied by 2. The result is stored in ‘totalDonation’
  4. Lastly, the result in ‘totalDonation’ is rounded using the ‘round’ function.

Since there are three choices for ‘GameType’, we use the ‘switch’ function. You can input different values into the modified ‘CashDonate’ function. Below are several examples.

> CashDonate(88, GameType="Away")
[1] 4576
> CashDonate(130, GameType="International")
[1] 7800
> CashDonate(102, GameType="Home")
[1] 4590

The next time we discuss R programming, I will explain how to overcome the problem of inputting each value into the function manually.

Quantitative Data Analysis Preparation

There are many different ways to approach data analysis preparation for quantitative studies. This post will provide some insight into how to do this. In particular, we will look at the following steps in quantitative data analysis preparation.

  • Scoring the data
  • Deciding on the types of scores to analyze
  • Inputting data
  • Cleaning the data

Scoring the Data

Scoring the data involves the researcher assigning a numerical value to each response on an instrument. This includes categorical and continuous variables. Below is an example.

Gender: Male(1)____________ Female(2)___________

I think school is boring

  1. Strongly Agree
  2. Agree
  3. Neutral
  4. Disagree
  5. Strongly Disagree

In the example above, the first item about gender has the value 1 for male and 2 for female. The second item asks about the person’s perception of school, from 1 (strongly agree) to 5 (strongly disagree). Every response is given a numerical value, and it is this number that is inputted into the computer for analysis.
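As a sketch of what scoring looks like in R (the responses below are made up), each response is simply mapped to its number:

```r
# Hypothetical raw gender responses scored as 1 = Male, 2 = Female
gender <- c("Male", "Female", "Female", "Male")
gender_score <- ifelse(gender == "Male", 1, 2)
gender_score  # 1 2 2 1
```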

Determining the Types of Scores to Analyze

Once data has been received, it is necessary to determine what types of scores to analyze. Single-item scores involve assessing how each individual person responded to a single item. An example would be voting: each individual vote is added up to determine the results.

Another approach is summed scores. In this approach, the results of several items are added together. This is done because one item alone does not fully capture whatever is being measured. For example, there are many different instruments that measure depression. Several questions are asked and then the sum of the scores indicates the level of depression the individual is experiencing. No single question can accurately measure a person’s depression so a summed score approach is often much better.

Difference scores can involve single-item or summed scores. What sets them apart is that difference scores measure change over time. For example, a teacher might measure a student’s reading comprehension before and after teaching the student basic skills. The difference is then calculated as below.

  • Score 2 – Score 1 = Difference
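In R, this calculation is a single subtraction (the scores below are invented for illustration):

```r
# Hypothetical reading comprehension scores before and after instruction
score1 <- c(55, 60, 48, 70, 62)  # first administration
score2 <- c(63, 66, 55, 74, 71)  # second administration
difference <- score2 - score1
difference  # 8 6 7 4 9
```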

Inputting Data

Inputting data often happens in Microsoft Excel since it is easy to load an Excel file into various statistical programs. In general, inputting data involves giving each item its own column. In this column, you put the respondents’ responses. Each row belongs to one respondent. For example, row 2 would refer to respondent 2. All the results for respondent 2 would be in this row for all the items on the instrument.

If you are summing scores or looking for differences, you need to create a column to hold the results of the summation or difference calculation. Often this is done in the statistical program and not Microsoft Excel.

Cleaning Data

Cleaning data involves searching for scores that are outside the range of an item’s scale and dealing with missing data. Out-of-range scores can be found through visual inspection or by running some descriptive statistics. For example, if you have a Likert scale of 1-5 and one item has a standard deviation of 7, it is an indication that something is wrong, because the standard deviation cannot be larger than the range.
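A quick sketch of how out-of-range scores might be flagged in R (the data are made up):

```r
# Hypothetical responses to a 1-5 Likert item; 7 is a data-entry error
responses <- c(3, 5, 1, 7, 4, 2)
which(responses < 1 | responses > 5)  # position of the out-of-range score
```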

Missing data are items that do not have a response. Depending on the type of analysis, this can be a major problem. There are several ways to deal with missing data.

  • Listwise deletion is the removal of any respondent who missed even one item on an instrument
  • Mean imputation is the inputting of the mean of the variable wherever there is a missing response
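A minimal sketch of mean imputation in R, assuming a single item with one missing response:

```r
# Hypothetical 1-5 Likert item with a missing response (NA)
item <- c(4, 2, NA, 5, 3)

# Replace each NA with the mean of the observed responses
item[is.na(item)] <- mean(item, na.rm = TRUE)
item  # the NA becomes 3.5, the mean of 4, 2, 5, 3
```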

There are other more complicated approaches but this provides some idea of what to do.

Conclusion

Preparing data involves planning what you will do. You need to consider how you will score the items, what type of score you will analyze, input the data, and how you will clean it. From here, a deeper analysis is possible.

Test Validity

Validity is often seen as a close companion of reliability. Validity is the assessment of the evidence that an instrument is measuring what it claims to measure. An instrument can be highly reliable (consistent in measuring something) yet lack validity. For example, an instrument may reliably measure motivation but not validly measure income. The problem is that an instrument that measures motivation would not measure income appropriately.

In general, there are several ways to assess validity, which include the following.

  • Content validity
  • Response process validity
  • Criterion-related evidence of validity
  • Consequence testing validity
  • Face validity

Content Validity

Content validity is perhaps the easiest way to assess validity. In this approach, the instrument is given to several experts who assess its appropriateness. Based on their feedback, a determination about the instrument’s validity is made.

Response Process Validity

In this approach, the respondents to an instrument are interviewed to see if they considered the instrument to be valid. Another approach is to compare the responses of different respondents for the same items on the instrument. High validity is indicated by the consistency of the responses among the respondents.

Criterion-Related Evidence of Validity

This form of validity involves measuring the same variable with two different instruments. The instruments can be administered over time (predictive validity) or simultaneously (concurrent validity). The results are then analyzed by finding the correlation between the two instruments. The stronger the correlation, the stronger the evidence of validity for both instruments.
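As a sketch, suppose the same trait is measured with two hypothetical instruments given at the same time; the analysis is just a correlation:

```r
# Hypothetical scores from two instruments measuring the same variable
instrument_a <- c(10, 14, 9, 16, 12)
instrument_b <- c(22, 27, 20, 31, 24)
cor(instrument_a, instrument_b)  # close to 1 here, suggesting concurrent validity
```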

Consequence Testing Validity

This form of validity looks at what happens in the environment after an instrument is administered. An example of this would be improved learning due to a test. Since the students are studying harder, it can be inferred that this is due to the test they just experienced.

Face Validity

Face validity is the perception students have that a test measures what it is supposed to measure. This form of validity cannot be tested empirically. However, it should not be ignored. Students may dislike assessment, but they know if a test is testing what the teacher tried to teach them.

Conclusion 

Validity plays an important role in the development of instruments in quantitative research. Which form of validity to use to assess the instrument depends on the researcher and the context that he or she is facing.

Logical Flow in R: If/Else Statements Part II

In a previous post, we looked at If/Else statements in R. We developed a function that calculated the amount of money James and the owner would give based on how many points the team scored. Below is a copy of this function.

CashDonate <- function(points, Dollars_per_point=40, HomeGame=TRUE){
 game.points <- points * Dollars_per_point
 if(points > 100) {game.points <- points * 30}
 if(HomeGame) {Total.Donation <- game.points * 1.5
 } else {Total.Donation <- game.points * 1.3}
 round(Total.Donation)
}

There is one small problem with this function. Currently, you have to input each game one at a time. It would be better if R could calculate the results of several games at once. For example, look at what happens when we try to input more than one game at a time.

> CashDonate(c(99,105,78))
[1] 5940 6300 4680
Warning message:
In if (points > 100) { :
  the condition has length > 1 and only the first element will be used

As you can see, we get a warning message and some of the values are wrong. Because only the first condition (99 > 100, which is FALSE) is used, every game is multiplied by 40. For example, the second value should be 4,725 and not 6,300.

In order to deal with this problem, R has the ‘ifelse’ function available. Unlike ‘if’, the ‘ifelse’ function is vectorized: it checks the condition for every element of a vector and picks the appropriate value for each one. We need to be able to choose the appropriate action based on the following information.

  • points scored is less than or greater than 100
  • Home game or not a home game
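Before modifying the function, here is a minimal illustration of how ‘ifelse’ handles a whole vector at once (the point totals are made up):

```r
# ifelse() returns one value per element of the condition vector
points <- c(99, 105, 78)
ifelse(points > 100, 30, 40)  # returns 40 30 40, one rate per game
```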

Remember, R could do this if one value was put into the ‘CashDonate’ function. Now we need to be able to calculate what to do based on several values in each of the vectors above. Below is the modified code for doing this.

CashDonate <- function(points, HomeGame=TRUE){
 JamesDonate<- points * ifelse(points > 100, 30, 40)
 totalDonation<- JamesDonate * ifelse(HomeGame, 1.5, 1.3)
 round(totalDonation)
}

Here is what the modified function does

  1. It has the argument ‘points’ and the default argument of ‘HomeGame = TRUE’
  2. The first calculation uses the ‘ifelse’ function. If the number of points is greater than 100, the points are multiplied by 30; otherwise, they are multiplied by 40. The result is stored in the variable ‘JamesDonate’
  3. Next, the amount from ‘JamesDonate’ is multiplied by 1.5 if it was a home game or 1.3 if it was not a home game. All this is put into the variable ‘totalDonation’
  4. The results are rounded

To use ‘CashDonate’ to its full potential you need to make a data frame. Below is the code for the ‘games’ data frame we will use.

games<- data.frame(game.points=c(88,100,99,111,96), HomeGame=c(TRUE, FALSE, FALSE, TRUE, FALSE))

In the ‘games’ data frame we have two columns, one for game points and another that tells us if it was a home game or not. Now we will use the ‘games’ data frame with the new ‘CashDonate’ and calculate the results. We need to use the ‘with’ function to do this. This function will be explained at a later date. Below are the results.

> with(games, CashDonate(game.points, HomeGame=HomeGame))
[1] 5280 5200 5148 4995 4992
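Until the ‘with’ function is covered, note that the same call can be written with the ‘$’ operator to pull the columns out of the data frame directly:

```r
CashDonate <- function(points, HomeGame=TRUE){
 JamesDonate <- points * ifelse(points > 100, 30, 40)
 totalDonation <- JamesDonate * ifelse(HomeGame, 1.5, 1.3)
 round(totalDonation)
}

games <- data.frame(game.points = c(88, 100, 99, 111, 96),
                    HomeGame = c(TRUE, FALSE, FALSE, TRUE, FALSE))

# Equivalent to the with() call above
CashDonate(games$game.points, HomeGame = games$HomeGame)
# [1] 5280 5200 5148 4995 4992
```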

You can calculate this manually if you would like. Now we can calculate more than one value in our ‘CashDonate’ function, which makes it much more useful than before. All thanks to the use of the ‘ifelse’ function in the code.

Assessing Reliability

In quantitative research, reliability measures an instrument’s stability and consistency. In simpler terms, reliability is how well an instrument is able to measure something repeatedly. There are several factors that can influence reliability. Some of these factors include unclear questions/statements, poor test administration procedures, and even the participants in the study.

In this post, we will look at different ways that a researcher can assess the reliability of an instrument. In particular, we will look at the following ways of measuring reliability…

  • Test-retest reliability
  • Alternative forms reliability
  • Kuder-Richardson Split Half Test
  • Coefficient Alpha

Test-Retest Reliability

Test-retest reliability assesses the reliability of an instrument by comparing results from the same sample over time. A researcher will administer the instrument at two different times to the same participants. The researcher then analyzes the data and looks for a correlation between the results of the two administrations of the instrument. In general, a correlation above about 0.6 is considered evidence of reasonable reliability of an instrument.

One major drawback of this approach is that giving the same instrument to the same people a second time often influences the results of the second administration. It is important that a researcher is aware of this, as it indicates that test-retest reliability is not foolproof.
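A minimal sketch of the analysis in R, using invented scores from two administrations to the same participants:

```r
# Hypothetical scores for six participants, tested twice
time1 <- c(12, 18, 15, 20, 10, 16)
time2 <- c(13, 17, 16, 21, 11, 15)
cor(time1, time2)  # above the rough 0.6 benchmark for reasonable reliability
```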

Alternative Forms Reliability 

Alternative forms reliability involves the use of two different instruments that measure the same thing. The two different instruments are given to the same sample. The data from the two instruments are analyzed by calculating the correlation between them. Again, a correlation around 0.6 or higher is considered as an indication of reliability.

The major problem with this is that it is difficult to find two instruments that really measure the same thing. Scales may claim to measure the same concept, but they may each have a different operational definition of it.

Kuder-Richardson Split Half Test

The Kuder-Richardson test assesses the reliability of instruments with categorical (e.g., right/wrong) items. In this approach, an instrument is split in half and the correlation is found between the two halves of the instrument. This approach looks at the internal consistency of the items of an instrument.

Coefficient Alpha

Another approach that looks at internal consistency is coefficient alpha. This approach involves administering an instrument and analyzing the Cronbach’s alpha. Most statistical programs can calculate this number. Normally, scores above 0.7 indicate adequate reliability. Coefficient alpha can only be used for continuous variables, such as Likert scales.
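For the curious, Cronbach’s alpha can also be computed by hand from its formula, alpha = (k/(k-1)) * (1 - sum of item variances / variance of the total score). The data below are invented for illustration:

```r
# Hypothetical responses: 4 respondents answering 3 Likert items
items <- data.frame(q1 = c(4, 5, 3, 4),
                    q2 = c(4, 4, 3, 5),
                    q3 = c(5, 4, 2, 4))
k <- ncol(items)                 # number of items
item_vars <- sapply(items, var)  # variance of each item
total_var <- var(rowSums(items)) # variance of the summed score
alpha <- (k / (k - 1)) * (1 - sum(item_vars) / total_var)
alpha  # 0.8 for these made-up data, above the usual 0.7 benchmark
```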

Conclusion

Assessing reliability is important when conducting research. The approaches discussed here are among the most common. Which approach is best depends on the circumstances of the study that is being conducted.

Reasons for Testing

Testing is done for many different reasons in various fields such as education, business, and even government. There are many motivations that people have for using evaluation. In this post, we will look at five reasons that testing is done. The five reasons are…

  • For placement
  • For diagnoses
  • For assessing progress
  • For determining proficiency
  • For providing evidence of competency

For Placement

Placement tests serve the purpose of determining at what level a student should be placed. They are often given at the beginning of a student’s learning experience at an institution, often before taking any classes. Normally, the test will consist of specific subject knowledge that a student needs to know in order to have success at a certain level.

For Diagnoses

Diagnostic tests are for identifying weaknesses or learning problems. They are similar to a doctor looking over a patient and trying to diagnose the patient’s health problem. Diagnostic tests help in identifying gaps in knowledge and help a teacher to know what they need to do to help their students.

For Assessing Progress

Progress tests are used to assess how the students are doing in comparison to the goals and objectives of the curriculum. At the university level, these are the mid-terms and final exams that students take. How well the students are able to achieve the objectives of the course is measured by progress tests.

For Determining Proficiency 

Testing for proficiency provides a snapshot of what the student is able to do right now. These tests do not pinpoint weaknesses like diagnostic tests, nor do they assess progress in comparison to a curriculum like progress tests. Common examples of this type of test are used to determine admission into a program, such as the SAT, MCAT, or GRE.

For Providing Evidence of Competency

Sometimes, people are not satisfied with traditional means of evaluation. Instead, they want to see what the student can do by examining the student’s performance over several assignments over the course of a semester. This form of assessment provides a way of having students produce work that demonstrates improvement in the classroom.

One of the most common forms of assessment that provides evidence of competency is the portfolio. In this approach, the students collect assignments that they have done over the course of the semester to submit. The teacher is able to see the students’ progress through their improvement over time. Such evidence is harder to track using tests.

Conclusions

How to assess is best left for the teacher to decide. However, teachers need options that they can use when determining how to assess their students. The examples provided here give teachers ideas on what assessments they can use in various situations.

Logical Flow in R: If/Else Statements

If statements in R are used to define choices in a script. For example, suppose there are two choices, ‘a’ and ‘b’, and an if statement is used. If a certain condition exists, ‘a’ happens; if not, ‘b’ happens.

Before we explore this more closely, we need to set up a scenario and a function for it. Imagine that James wants to donate $40.00 for every point his team scores in a game. To calculate this we create the following function.

CashDonate <- function(points, Dollars_per_point=40){
 game.points<- points * Dollars_per_point
 round(game.points)
}

Here is what we did

  1. We created the function “CashDonate”
  2. The function has the arguments ‘points’ and ‘Dollars_per_point’ which has a default value of 40
  3. The variable ‘game.points’ is created to hold the value of ‘points’ times ‘Dollars_per_point’
  4. The output of ‘game.points’ is then rounded which is the total amount of money that should be donated

The function works perfectly. Below is an example of the output when James’ team scores 95 points.

> CashDonate(95)
[1] 3800

Later, James comes to you upset. His team is scoring so many points that it is starting to affect his budget. He now wants to change the amount he donates IF the team scores over 100 points. If his team scores more than 100 points, James wants to donate $30.00 per point instead of $40.00. Below is the modified function. Notice the use of ‘if’ in the script.

CashDonate <- function(points, Dollars_per_point=40){
 game.points <- points * Dollars_per_point
 if(points > 100) {game.points <- points * 30
 }
 round(game.points)
}

Most of the code is the same except notice the following changes

  1. We added an ‘if’ statement; after it, we put the condition
  2. If the number of points is greater than 100, the donation is recalculated as the points multiplied by 30 instead of the default rate of 40

Below are two examples. Example 1 shows the results of the modified ‘CashDonate’ function when fewer than 100 points are scored. Example 2 shows the results when more than 100 points are scored.

EXAMPLE 1
> CashDonate(98)
[1] 3920
EXAMPLE 2
> CashDonate(103)
[1] 3090

Else Statements

Else statements specify what happens when the condition of an ‘if’ statement is FALSE. For example, if ‘a’ is TRUE, do one thing; if not, do something else. Below is a scenario that requires the use of an ‘else’ statement.
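A minimal, self-contained if/else looks like this (the condition and values are arbitrary):

```r
x <- 5
if (x > 3) {
  result <- "big"    # runs because the condition is TRUE
} else {
  result <- "small"  # would run if the condition were FALSE
}
result  # "big"
```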

The owner of James’ team decides that he wants to contribute to the donation as well. He states that for a home game he will give 50% of whatever James gives, and he will give 30% of whatever James gives for away games. The total will be added together as one amount. Below is the modified code.

CashDonate <- function(points, Dollars_per_point=40, HomeGame=TRUE){
 game.points <- points * Dollars_per_point
 if(points > 100) {game.points <- points * 30}
 if(HomeGame) {Total.Donation <- game.points * 1.5
 } else {Total.Donation <- game.points * 1.3}
 round(Total.Donation)
}

Here is an explanation

  1. At the top, we add the argument ‘HomeGame’ and set the default to TRUE, which means the other choice is FALSE, which is used for an away game
  2. The rule for points over 100 is the same as before
  3. The second ‘if’ statement handles the logical argument ‘HomeGame’. If it is a home game, the result of ‘game.points’ is multiplied by 1.5 (James’ amount plus 50%). If it is not a home game (notice the ‘else’), the result of ‘game.points’ is multiplied by 1.3 (plus 30%). The result of either choice is stored in the variable ‘Total.Donation’
  4. Lastly, the results of ‘Total.Donation’ are rounded

Below are 4 examples

Example 1 is less than 100 points scored in a home game

> CashDonate(98)
[1] 5880

Example 2 is more than 100 points scored in a home game

> CashDonate(102)
[1] 4590

Example 3 is less than 100 points scored at an away game

> CashDonate(99, HomeGame = FALSE)
[1] 5148

Example 4 is more than 100 points scored at an away game

> CashDonate(104, HomeGame = FALSE)
[1] 4056

Conclusion

If statements provide choice. Else statements provide choice for logical arguments. Both can be used in R to provide several different actions in a script.

Measuring Variables

When conducting quantitative research, one of the earliest things a researcher does is determine what their variables are. This involves developing an operational definition of the variable, which is a description of how you define the variable as well as how you intend to measure it.

After developing an operational definition of the variable(s) of a study, it is now necessary to measure the variable in a way that is consistent with the operational definition. In general, there are five forms of measurement and they are…

  • Performance measures
  • Attitudinal measures
  • Behavioral observation
  • Factual Information
  • Web-based data collection

All forms of measurement involve an instrument which is a tool for actually recording what is measured.

Performance Measures

Performance measures assess a person’s ability to do something. Examples of instruments of this type include an aptitude test, an intelligence test, or a rubric for assessing an essay. Often this form of measurement leads to “norms” that serve as a criterion for the progress of students.

Attitudinal Measures

Attitudinal measures assess people’s perceptions. They are commonly associated with Likert scales (strongly disagree to strongly agree). This form of measurement allows a researcher access to the attitudes of hundreds of people instead of the attitudes of a few, as would be found in qualitative research.

Behavioral Observation

Behavioral observation is the observation of behaviors of interest to the researcher. The instrument involved is normally some sort of checklist. When the behavior is seen it is notated using tick marks.

Factual Information

Data that has already been collected and is available to the public is often called factual information.  The researcher takes this information and analyzes it to answer their questions.

Web-Based Data Collection

Surveys or interviews conducted over the internet are examples of web-based data collection. This is still relatively new. There are still people who question this approach as there are concerns over the representativeness of the sample.

Which Measure Should I Choose?

There are several guidelines to keep in mind when deciding how to measure variables.

  • What form of measurement are you able to complete? Your personal expertise, as well as the context of your study, affects what you are able to do. Naturally, you want to avoid doing publication-quality research with a measurement form you are unfamiliar with or doing research in an uncooperative place.
  • What are your research questions? Research questions shape the entire study. A close look at research questions should reveal the most appropriate form of measurement.

The actual analysis of the data depends on the research questions. As such, almost any statistical technique can be applied for all of the forms of measurement. The only limitation is what the researcher wants to know.

Conclusion

Measuring variables is the heart of quantitative research. The approach taken depends on the skills of the researcher as well as the research questions. Every form of measurement has its place when conducting research.

Tips for Lesson Planning: Part II

Before developing a plan of instruction there are many factors to consider. This post will consider the following points…

  • Needs assessment
  • Syllabus
  • Outlining purpose

Needs Assessment

Before committing to any particular plan of instruction, a teacher must determine what the needs of the students are. This is most frequently done through conducting a needs assessment.

There are many ways to find out what the students need to know. One way is through speaking with the students. This provides some idea as to what their interests are. Student interest can be solicited through conversation, interviews, questionnaires, etc. Another way is to consult the subject matter of the course by examining other curricula related to the subject.

As an educator, it is necessary to balance the needs of the students with the requirements of the course. Many things are modifiable in a course, but some things are not. Therefore, keeping in mind the demands of students and the curriculum is important.

Developing the Syllabus

Once the instructor has an idea of the students’ needs, it is time to develop the syllabus of the class. There are several different types of syllabus. A skills syllabus focuses on specific skills students need in the discipline. For example, an ESL syllabus may focus on grammar. A skills syllabus focuses on passive skills rather than active ones.

A functional syllabus is focused on several different actions. Going back to ESL, if a syllabus is focused on inviting, apologizing, or doing something else, it is a functional syllabus. These skills are active.

A situational syllabus is one in which learning takes place in various scenarios. In ESL, a student might learn English that they would use at the market, in the bank, at school, etc. The focus is on experiential/authentic learning.

The type of syllabus developed is based on the needs of the students. This is important to remember as many teachers predetermine this aspect of the learning experience.

Outlining Purpose

Developing aims and goals has been discussed in a previous post. In short, aims lead to goals, which lead to objectives, which, if necessary, can lead to indicators. The difference between each type is the amount of detail involved. Aims are the broadest and may apply across an entire school or department, while indicators are the most detailed and may apply only to a specific assignment.

A unique concept for this post is the development of personal aims. Personal aims are opportunities for the teacher to try something new or improve an aspect of their teaching. For example, if a teacher has never used blogs in the classroom he/she might make a personal aim to use blogs in their classroom. Personal aims allow for reflection which is critical to teacher development.

Conclusion

Lesson planning begins with understanding what the students need. From there, it is necessary to decide what type of syllabus you will make. Lastly, the teacher needs to decide on the various information required, such as goals and objectives. Keep in mind that many schools have a specific format for their syllabus. Even so, a teacher can keep the concepts of this post in mind even if the structure of the syllabus is already determined.

Tips for Lesson Planning: Part I

Developing lesson plans is a core component of teaching. However, there is a multitude of ways to approach this process. This post will provide some basic ideas on approaching the development of lesson plans by sharing thoughts on the following…

  • The paradox of planning lessons
  • The continuum of planning
  • Using plans in class

The Lesson Plan Paradox

A paradox is a statement that contradicts itself. An example would be “jumbo shrimp.” We think of shrimp as something that is small, so for something to be both jumbo and small at the same time usually does not make sense.

Within education, the lesson plan paradox is the idea that a teacher can plan all aspects of a lesson in advance without knowing what will happen in the moment while teaching in their classroom.  Many people believe that there is an interaction that happens while teaching that cannot be anticipated when developing lesson plans.

The Continuum of Planning

In general, the amount of planning needed depends on the skill level of the teacher. Experienced teachers need to plan much less as they have already taught the various concepts before and know where they are going. Inexperienced teachers need to plan much more as they are new to the teaching endeavor.

Experience means experience teaching a particular subject and not only the years of teaching. For example, an excellent algebra teacher would not need formal lesson plans for algebra but may need to plan more carefully if they are asked to teach statistics or some other math subject. Even though they know the subject, the lack of experience teaching it makes it necessary to plan more carefully.

Planning can go from no planning at all to planning every step. Jungle path lesson planning is the extreme of no planning. In this approach, an experienced teacher shows up to class with nothing and see where the journey takes them. Doing this occasionally may break the monotony of studying but continuous use will lead students to think that the teacher is unprepared.

At the opposite extreme are the formal lesson plans developed by student teachers. These lesson plans include everything: objectives, materials, procedures, openers, closers, etc. Some even require teachers to indicate how much time every step will take.

Somewhere in the middle is where most teachers are: uncomfortable with no planning, yet unwilling to plan every minutia of the learning experience like a beginner.

Using Plans in Class

This leads to the question of how closely to follow lesson plans in class. There are several reasons to divert from a lesson plan. One is teaching moments: those opportunities where something happens in or out of class that allows for spontaneous learning. For example, a health teacher may divert from their lesson plan to talk about how cancer works because the students know of a teacher who has cancer.

A second reason to divert from a lesson plan is an unforeseen problem. For example, the computer crashes, cutting off access to the internet. This would lead a teacher to find a different way to teach the lesson.

Lastly, a lesson plan can be set aside if the teacher notices that the students need a skill retaught because they are struggling with it. For example, an English teacher is trying to teach students how to write paragraphs when he or she can tell the students still do not understand how to develop sentences.

Conclusion

Everyone has their own style of lesson planning. It is important to develop an approach while being open to incorporating new ways of planning. The ideas suggested here can help to broaden a teacher’s approach to planning lessons.

Working with Quiet ESL Students


In many classes, there are one or more students who are quiet and do not want to speak in class. This is often even more common in an ESL class, which adds the additional challenge of speaking in a different language. Despite these challenges, there are several ways that a teacher can deal with this. Among them are the following…

  • Preparation time
  • Repetition
  • Adjust group size
  • Understanding the role of the teacher

Preparation Time

Speaking spontaneously is difficult for many ESL students. Part of the challenge is not having time to think about what they want to say and determine how to say it in English. Sometimes quiet students will share if they have time to think in advance. This can happen through the teacher prompting the students and giving them time to formulate an answer.

A teacher can ask the students any question, such as, “What is the best mall in town?”. Give the students a few minutes to think and then call on students. Perhaps not all will share, but the preparation time greatly reduces stress.

Repetition

Repeating the same speaking experience often helps students to improve. This depends greatly on whether they receive feedback each time. Students also need to reflect upon how they did themselves. The combination of feedback and reflection often leads to improvement.

For example, if we ask the students about the “best mall in town” students share their answer and are given feedback from peers as well as a chance to think about their performance. The next day the teacher could ask this question again allowing the students to demonstrate how they have improved.

Adjust Group Size

Often, students do not share because of shyness. Speaking in front of the whole class is scary even for some adults. To deal with this, the teacher needs to try different group sizes. Some students will never speak in a group but will speak if they are paired with someone. It is much harder for a quiet student to hide when working with a single partner.

Understanding the Role of the Teacher

The teacher, in addition to applying strategies such as the ones above, also can provide support to quiet students. For example, the teacher can encourage the students to speak by providing suggestions for what the students may want to say.

Another approach would be to have the teacher participate in the discussion. Through this, the teacher can guide the discussion. The downside to this is that it is easy for the teacher to take over and dominate the discussion.

Conclusion

Quiet students in ESL classrooms have many reasons for choosing not to share. Whatever the case, there are ways and strategies to deal with this. It is up to the teacher to find ways that are appropriate for their students.

Developing Functions in R Part III: Using Functions as Arguments


Previously, we learned how to add nameless arguments to a function using ellipses ‘. . .’. In this post, we will learn how to use functions as arguments in other functions. An argument is the information passed to a function within the parentheses ( ) in R programming. The reason for doing this is that it allows for many shortcuts in coding. Instead of retyping a formula, you can pass an existing function as an argument and save a lot of time.

Below is the code that we have been working with for awhile before we add a function as an argument.

Percent_Divided <- function(x, divide = 2, ...) {
 ToPercent <- round(x/divide, ...)
 Output <- paste(ToPercent, "%", sep = "")
 return(Output)
}

As a reminder, the ‘Percent_Divided’ function takes a number or variable ‘x’, divides it by two as a default, and adds a ‘%’ sign after the number(s). With the ‘. . .’ you can pass other arguments, such as ‘digits’, and specify how many digits you want after the decimal point. Below is an example of the ‘Percent_Divided’ function in action for a variable called ‘B’ with the added argument ‘digits = 3’.

> B
[1] 23.35345 45.56456 32.12131
> Percent_Divided(B, digits = 3)
[1] "11.677%" "22.782%" "16.061%"

Functions as Arguments

We will now make a function that has a function for an argument. We will set a default function but remember that anything can be passed for the function argument. Below is the code for one way to do this.

Percent_Divided <- function(x, divide = 2, FUN_ARG = round, ...) {
 ToPercent <- FUN_ARG(x/divide, ...)
 Output <- paste(ToPercent, "%", sep = "")
 return(Output)
}

Here is an explanation

  1. Most of this script is the same. The main difference is that we added the argument ‘FUN_ARG’ to the first line of the script. This is the place where we can insert whatever function we want. The default function is ‘round’; if we do not specify any function, ‘round’ will be used.
  2. In the second line of the code, you again see ‘FUN_ARG’. This function is applied after ‘x’ is divided by 2, along with whatever arguments are passed through the ‘. . .’.
  3. The rest of the code has already been explained and has not been changed.
  4. Important note: if we do not change the default of ‘FUN_ARG’, which is the ‘round’ function, we will keep getting the same answers as always. The ‘FUN_ARG’ argument is only interesting when we override the default.
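To confirm point 4, here is a quick sketch (re-creating the function and the sample values of ‘B’ from above) showing that leaving ‘FUN_ARG’ at its default reproduces the earlier results:

```r
# Re-creating the function from the post; 'round' is the default for FUN_ARG
Percent_Divided <- function(x, divide = 2, FUN_ARG = round, ...) {
 ToPercent <- FUN_ARG(x/divide, ...)
 Output <- paste(ToPercent, "%", sep = "")
 return(Output)
}

B <- c(23.35345, 45.56456, 32.12131)

# No FUN_ARG supplied, so the default 'round' is used
Percent_Divided(B, digits = 3)
# [1] "11.677%" "22.782%" "16.061%"
```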

Below is an example of our modified function. The function we are going to pass through the ‘FUN_ARG’ argument is ‘signif’. ‘signif’ rounds the values in its first argument to the specified number of significant digits. We will also pass the argument ‘digits = 3’ through the ellipses ‘. . .’. The values of variable B (see above) will be used for the function.

> Percent_Divided(B, FUN_ARG = signif, digits = 3)
[1] "11.7%" "22.8%" "16.1%"

Here is what happened

  1. The ‘Percent_Divided’ function was run
  2. The function ‘signif’ was passed through the ‘FUN_ARG’ argument and the argument ‘digits = 3’ was passed through the ellipses ‘. . .’ argument.
  3. All the values for B were transformed based on the script in the ‘Percent_Divided’ function
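Since anything can be passed through ‘FUN_ARG’, here is one more sketch, using our own illustrative choice of base R’s ‘floor’ rather than a function from the post. ‘floor’ simply drops the decimals, so nothing needs to be passed through the ellipses:

```r
# Same function as in the post; 'floor' is our own illustrative choice
Percent_Divided <- function(x, divide = 2, FUN_ARG = round, ...) {
 ToPercent <- FUN_ARG(x/divide, ...)
 Output <- paste(ToPercent, "%", sep = "")
 return(Output)
}

B <- c(23.35345, 45.56456, 32.12131)

# 'floor' takes no 'digits' argument, so the '. . .' stays empty
Percent_Divided(B, FUN_ARG = floor)
# [1] "11%" "22%" "16%"
```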

Conclusion

Using functions as arguments is mostly about saving time when developing code. Even though this may seem complicated, it is actually rudimentary programming.

Developing Functions in R Part II: Adding Arguments


In this post, we will continue the discussion on working with functions in R. Functions serve the purpose of programming R to execute several operations at once. Here, we will look at adding additional arguments to a function.

Arguments are the various entries within the parentheses. For example, in our example below the only argument of the function is ‘x’.

MakePercent <- function(x) {
 ToPercent <- round(x, digits = 2)
 Output <- paste(ToPercent, "%", sep = "")
 return(Output)
}

In the example above there are many other arguments besides ‘x’. However, the only argument for the function is ‘x’. The other arguments in other parentheses belong to other functions in the script. In this post, we are going to learn how to add additional arguments to the function.
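As a quick sketch of ‘MakePercent’ in action (the input value here is our own illustrative choice), the function rounds the number to two digits and appends a ‘%’ sign:

```r
# The MakePercent function from the post
MakePercent <- function(x) {
 ToPercent <- round(x, digits = 2)
 Output <- paste(ToPercent, "%", sep = "")
 return(Output)
}

MakePercent(22.12234566)
# [1] "22.12%"
```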

Let’s say that we want to convert a number to a percentage like a previous function we made but we now want to be able to divide the number by whatever we want. Here is how it could be done.

Percent_Divided <- function(x, divide) {
 ToPercent <- round(x/divide, digits = 2)
 Output <- paste(ToPercent, "%", sep = "")
 return(Output)
}

Here is what we did

  1. We created the object ‘Percent_Divided’ and assigned it the function with the arguments ‘x’ and ‘divide’.
  2. Next, inside the braces { }, we create the variable ‘ToPercent’ and assign it the result of ‘round’, which takes ‘x’ divided by whatever value ‘divide’ receives and rounds the result to two digits.
  3. The result of ‘ToPercent’ is then assigned to the variable ‘Output’, where a ‘%’ sign is appended to the value.
  4. Lastly, the result of ‘Output’ is returned to the console.

Sounds simple. Below is the function in action dividing a number by 2 and then by 3

> source('~/.active-rstudio-document', echo=TRUE)

> Percent_Divided <- function(x, divide) {
+         ToPercent <- round(x/divide, digits = 2)
+         Output <- paste(ToPercent, "%", sep = "")
+    .... [TRUNCATED] 
> Percent_Divided(22.12234566, divide=2)
[1] "11.06%"
> Percent_Divided(22.12234566, divide=3)
[1] "7.37%"

Here is what happened

  1. I sourced the script from the source editor by typing ctrl + shift + enter
  2. Next, I used the function ‘Percent_Divided’ with the number 22.12234566 and I decided to divide the number by two
  3. R returns the answer 11.06%
  4. Next I repeat the process but I divide by 3 this time
  5. R returns the answer 7.37%

There is one problem. The argument ‘divide’ has no default value. What this means is that you have to tell R what the value of ‘divide’ is every single time. As an example, see below.

> Percent_Divided(22.12234566)
Error in Percent_Divided(22.12234566) : 
  argument "divide" is missing, with no default

Because I did not tell R what value ‘divide’ would be, R was not able to complete the process of the function. To solve this problem we will set the default value of ‘divide’ to 10 in the script as shown below.

Percent_Divided <- function(x, divide = 10) {
 ToPercent <- round(x/divide, digits = 2)
 Output <- paste(ToPercent, "%", sep = "")
 return(Output)
}

If you look closely you will see ‘divide = 10’. This is the default value for ‘divide’; if we do not set another number for ‘divide’, R will use 10. Below is an example using the default value of ‘divide’ and another example with ‘divide’ set to 5.

> Percent_Divided <- function(x, divide = 10) {
+         ToPercent <- round(x/divide, digits = 2)
+         Output <- paste(ToPercent, "%", sep = "") .... [TRUNCATED] 
> Percent_Divided(22.12234566)
[1] "2.21%"
> Percent_Divided(22.12234566, divide = 5)
[1] "4.42%"

First, we sourced the script using ctrl + shift + enter. In the first example, the number is automatically divided by 10 because this is the default. In the second example, we specified that we wanted to divide by five by adding the argument ‘divide = 5’. You can see the difference in the results.

In a future post, we will continue to examine the role of arguments in functions.

Reviewing the Literature: Part II


In the last post, we began a discussion of the steps involved in reviewing the literature and looked at the first two steps, which are identifying key terms and locating literature. In this post, we will look at the last three steps of developing a review of literature, which are…

3. Evaluate and select literature to include in your review
4. Organize the literature
5. Write the literature review

Evaluating Literature

This step was alluded to when I wrote about using Google Scholar and Google Books in Part I. For articles, you want to assess their quality by determining who publishes the journal. Reputable publishers usually publish respectable journals. This is not to say that other sources of articles are totally useless. The point is that you want to attract as few questions as possible when it comes to the quality of the sources you use to develop a literature review.

One other important concept in evaluating literature is the relevancy of the sources. You want sources that focus on a similar topic, population, and/or problem. It is easy for a review of literature to lose focus, so this is a critical criterion to consider.

Organizing the Literature 

There are many options for organizing sources. You can make an outline and group the sources together by heading, or you can construct some sort of visual of the information. The place to start is to examine the abstracts of the articles that are going to be a part of your literature review. The abstract is a summary of the study and is a way to get an understanding of a study quickly.

If the abstract indicates that a study is beneficial you can look at the whole article to learn more. If the whole article is unavailable you can use the abstract as a potential source.

Writing a Review of Literature

Writing involves taking your outline or visual and converting it into paragraph format. There are at least three common ways to write a literature review: the thematic review, the study-by-study review, and the combo review.

The thematic review states a theme in the research and cites several sources. There is very little detail. The citations support the claim made by the theme. Below is an example using APA formatting.

Smoking is bad for you (James, 2013; Smith, 2012; Thomas, 2009).

The details of the studies above are never shared but it is assumed that these studies all support the claim that smoking is bad for you.

Another type of literature review is the study-by-study review. In this approach, a detailed summary is provided of several studies under a larger theme. Consider the example below.

Thomas (2009) found in his study among middle class workers that smoking reduces lifespan by five years.

This example provides details about the dangers of smoking as found in one study.

A combo review is a mixture of the first two approaches. Sometimes you provide a thematic review; other times you provide the details of a study-by-study review. This is the most common approach, as it is the easiest to read because it provides an overview with occasional detail.

Conclusion

The ideas presented here are meant to provide support in writing a review of literature. There are many other ways to approach this, but the concepts presented here will provide some guidance.

Reviewing the Literature: Part I


The research process often begins with a literature review. A review of literature is a systematic summary of books, journal articles, and other sources pertaining to a particular topic. The purpose of a literature review is to demonstrate how your study adds to the existing literature and also to show why your study is needed.

In general, there are five common steps to reviewing the literature and they are…

  1. Identify key terms
  2. Locate literature
  3. Evaluate and select literature to include in your review
  4. Organize the literature
  5. Write the literature review

In this post, we will discuss the first two.

Identify Key Terms

The purpose of identifying key terms is that they give you words to “google” when you conduct a search. Below are some ways to develop key terms.

  • Creating some sort of title, even if it is temporary, and conducting a search based on words in this title is one way to begin.
  • If you already have research questions, you can look for important words in these questions to conduct a search.
  • Find an article that is studying something similar to you and look at the keywords that they include. Many articles have a list of keywords on the first page that can be used for other studies.

Locating Literature

Locating literature is not as difficult as it was years ago thanks to the internet. Now, the search for high-quality sources doesn’t even require leaving home. There is a rough hierarchy in terms of the quality and age of the material available, as follows. Each example below is rated on a scale of 1-5 for quality and newness; the higher the rating, the higher the quality or newness of the example.

  • Websites, newspapers, and blogs: Quality 1, Newness 5
  • Academic publications such as conference papers and theses: Quality 2, Newness 4
  • Peer-reviewed journal articles: Quality 3, Newness 3
  • Books: Quality 4, Newness 2
  • Summaries like encyclopedias: Quality 5, Newness 1

In this example, the lower the quality, normally the newer the information is. Keep in mind that there are many exceptions to the example above. Self-published books would obviously have a much lower quality rating, while some online sources are of much higher quality because of who is providing the information.

Once you have some keywords it is time to begin the search. Google books is an excellent place to begin. When you get to this website, you type in your key term and Google returns a list of books that contain the key term. You click on the book and it takes you to the page where the term is. This is like holding the book in your hand at the library. You note whatever information you need and go to another book.

For Google scholar, you go to the site and type in your key term. Google Scholar gives you several pages of articles. Before choosing, there are a few guidelines to keep in mind.

  • Depending on your field, you will probably be expected to cite recent literature in your review, often from the last 5-10 years. To do this, you need to set a custom range for the articles you want to view. Focusing on the last 5-10 years actually helps you focus and gets things done quicker. You only cite older material if it was groundbreaking.
  • Google Scholar gives you any article without concern for quality. To protect yourself from citing poor research, one strategy is to consider who the publisher is. Below are a few examples of high-quality publishers of academic journals. If the article was published by one of them, it is probably of decent quality.
    • Sage, JSTOR, Wiley, Elsevier

Conclusion

This provides some basic information on beginning the process. In a later post, we will go over the last few steps of conducting a literature review.