Creating Multiple Choice Quiz Questions in Moodle

[Video: Making multiple choice quiz questions in Moodle]


Reading Assessment at the Interactive and Extensive Level

In reading assessment, the interactive and extensive levels are the highest levels of reading. This post will provide examples of assessments at each of these two levels.

Interactive Level

Reading at this level focuses on both the form and meaning of the text, with an emphasis on top-down processing. Below are some assessment examples.

Cloze

Cloze assessment involves removing certain words from a paragraph and expecting the student to supply them. Words are removed either at fixed intervals, such as every nth word (fixed-ratio deletion), or on the basis of their meaning (rational deletion).

In terms of marking, you have the choice of marking based on the student providing the exact wording or an appropriate wording. Requiring the exact wording is strict but consistent, while accepting appropriate wording can be subjective.

Read and Answer the Question

This is perhaps the most common form of reading assessment. The student simply reads a passage and then answers questions in a true/false, multiple-choice, or some other format.

Information Transfer

Information transfer involves the students interpreting something. For example, they may be asked to interpret a graph and answer some questions. They may also be asked to elaborate on the graph, make predictions, or explain. Explaining a visual is a common requirement for the IELTS.

Extensive Level

This is the highest level of reading. It is strictly top-down and requires the ability to see the "big picture" within a text. Marking at this level is almost always subjective.

Summarize and React

Summarizing and reacting requires the student to read a large amount of information, share the main ideas, and then provide their own opinion on the topic. This is difficult because the student must understand the text to a certain extent and then form an opinion about what they understand.

I also like to have my students write several questions they have about the text. This teaches them to identify what they do not know. These questions are then shared in class so that they can be discussed.

For marking purposes, you can give directions about the number of words, paragraphs, etc. to provide guidance. However, marking at this level of reading is still subjective. The primary purpose of marking should probably be evidence that the student read the text.

Conclusion

The interactive and extensive levels of reading are where teaching can become enjoyable. Students have moved beyond learning to read and are now reading to learn. This opens up many possibilities in terms of learning experiences.

Reading Assessment at the Perceptual and Selective Level

This post will provide examples of assessments that can be used for reading at the perceptual and selective level.

Perceptual Level

The perceptual level is focused on bottom-up processing of text. Comprehension ability is not critical at this point. Rather, you are just determining if the student can accomplish the mechanical process of reading.

Examples

Reading Aloud-How this works is probably obvious to most teachers. The students read a text out loud in the presence of an assessor.

Picture-Cued-Students are shown a picture with words printed at the bottom. The student reads a word and points to a visual example of it in the picture. For example, if the picture has a cat in it, the word cat would appear at the bottom, and the student would read the word cat and point to the actual cat in the picture.

This can be extended by using sentences instead of words. For example, if the picture shows a man driving a car, there may be a sentence at the bottom that says "a man is driving a car." The student would then point to the man in the picture who is driving.

Another option is T/F statements. Using our cat example from above, we might write "There is one cat in the picture," and the student would then select T or F.

Other Examples-These include multiple-choice and written short-answer items.

Selective Level

The selective level is the next level above the perceptual level. At this level, the student should be able to recognize various aspects of grammar.

Examples

Editing Task-Students are given a reading passage and are asked to fix the grammar. This can happen in many different ways. They could be asked to pick the incorrect word in a sentence or to add or remove punctuation.

Picture-Cued Task-This task appeared at the perceptual level, but here it is more complicated. For example, the students might be required to read statements and label a diagram appropriately, such as the human body or aspects of geography.

Gap-Filling Task-Students read a sentence and complete it appropriately.

Other Examples-These include multiple-choice and matching items. The multiple-choice items may focus on grammar, vocabulary, etc. Matching attempts to assess a student's ability to pair similar items.

Conclusion

Reading assessment can take many forms. The examples here provide ways to assess students who are still highly immature in their reading abilities. As fluency develops, more complex measures can be used to determine a student's reading capability.

Review of “See How It’s Made”

This is a review of the book See How It’s Made written by Penny Smith and Lorrie Mack.

The Summary

This book takes several everyday products such as ice cream, CDs, t-shirts, and crayons and illustrates the process of how each item is made. The authors take you into the factories where these products are produced and show you, through the use of photographs, how each item is made. It can be surprising even for teachers to learn how much work goes into making CDs or apple juice.

The Good

The photo rich environment of the text makes it as realistic as possible. In addition, choosing common everyday items really helps in relevancy for students. Many kids find it interesting to know how pencils and crayons are made. The book is truly engaging at least in a one-on-one situation.

The Bad

The text is small in this book. This would make reading it difficult for younger students. In addition, although I appreciate the photos, there are so many jammed onto a single page that it would be difficult to share this book with an entire class. This leaves the book for use only in the class library for individual students.

Lastly, kids learn a lot of relevant, interesting things, but there seems to be no overall point to the text. It is just a collection of different processes for making things. It is left to the teacher to come up with a reason for reading this book.

The Recommendation

This book is 3/5 stars. It's a great text in terms of the visual stimulus, but it can be difficult to read and lacks a sense of direction.

Types of Reading in ESL

Reading for comprehension involves two forms of processing which are bottom-up and top-down. Bottom-up processing involves pulling letters together to make words, words to make sentences, etc. This is most commonly seen as students sounding out words when they read. The goal is primarily to just read the word.

Top-down processing is the use of prior knowledge, usually organized as schemas in the mind, to understand what is being read. For example, after a student reads the word "cat" using bottom-up processing, they then use top-down processing to draw on what they know about cats, such as their appearance, diet, habits, etc.

These two processes work together in order for us to read. Generally, they happen simultaneously as we are frequently reading and using our background knowledge to understand what we are reading.

In the context of reading, there are four types of reading. From simplest to most complex, they are

  • Perceptive
  • Selective
  • Interactive
  • Extensive

We will now look at each in detail.

Perceptive

Perceptive reading is focused primarily on bottom-up processing. In other words, if a teacher is trying to assess this type of reading, they simply want to know whether the student can read or not. The ability to understand or comprehend the text is not the primary goal at this level.

Selective

Selective reading involves looking at a reader's ability to recognize grammar, discourse features, etc. This is done with brief paragraphs and short reading passages. Assessment involves standard item types such as multiple-choice, short answer, true/false, etc.

In order to be successful at this level, the student needs to use both bottom-up and top-down processing. Charts and graphs can also be employed.

Interactive

Interactive reading involves deriving meaning from the text. This places even more emphasis on top-down processing. Readings are often chosen from genres that employ implied main ideas rather than stated. The readings are also more authentic in nature and can include announcements, directions, recipes, etc.

Students who lack background knowledge will struggle with this type of reading regardless of their language ability. In addition, inability to think critically will impair performance even if the student can read the text.

Extensive

Extensive reading involves reading large amounts of information and being able to understand the "big picture." The student needs to be able to separate the details from the main ideas. Many students struggle with this in their native language. As such, it is even more difficult when students are trying to digest large amounts of information in a second language.

Conclusion

Reading is a combination of making sense of the words and using prior knowledge to comprehend text. The levels of reading vary in their difficulty. In order to have success at reading, students need to be exposed to many different experiences so that they have the background knowledge they can call on when reading something new.

Diversity and Lexical Dispersion Analysis in R

In this post, we will learn how to conduct a diversity and lexical dispersion analysis in R. Diversity analysis is a measure of the breadth of an author's vocabulary in a text. R provides several calculations of this in its output.

Lexical dispersion is used for knowing where or when certain words are used in a text. This is useful for identifying patterns if this is a goal of your data exploration.

We will conduct our two analyses by comparing two famous philosophical texts:

  • Analects
  • The Prince

These books are available at the Gutenberg Project. You can go to the site, type in the titles, and download them to your computer.

We will use the "qdap" package in order to complete the analysis. Below is some initial code.

library(qdap)

Data Preparation

Below are the steps we need to take to prepare the data:

  1. Paste the text files into R
  2. Convert the text files to ASCII format
  3. Convert the ASCII format to data frames
  4. Split the sentences in the data frame
  5. Add a variable that indicates the book name
  6. Combine the two books into one dataframe

We now need to prepare the two texts. First, we move them into R using the "paste" function.

analects<-paste(scan(file ="C:/Users/darrin/Documents/R/R working directory/blog/blog/Text/Analects.txt",what='character'),collapse=" ")
prince<-paste(scan(file ="C:/Users/darrin/Documents/R/R working directory/blog/blog/Text/Prince.txt",what='character'),collapse=" ")

We must convert the text files to ASCII format so that R is able to interpret them.

analects<-iconv(analects,"latin1","ASCII","")
prince<-iconv(prince,"latin1","ASCII","")

For each book, we need to make a dataframe. The argument "texts" gives our dataframe one variable called "texts" which contains all the words in each book. Below is the code.

analects<-data.frame(texts=analects)
prince<-data.frame(texts=prince)

With the dataframes completed, we can now split the variable "texts" in each dataframe by sentence. The "sentSplit" function will do this.

analects<-sentSplit(analects,'texts')
prince<-sentSplit(prince,'texts')

Next, we add the variable "book" to each dataframe. For each row or sentence in the dataframe, the "book" variable will tell you which book the sentence came from. This will be valuable for comparative purposes.

analects$book<-"analects"
prince$book<-"prince"

Now we combine the two books into one dataframe. The data preparation is now complete.

twobooks<-rbind(analects,prince)

Data Analysis

We will begin with the diversity analysis.

div<-diversity(twobooks$texts,twobooks$book)
div
           book wc simpson shannon collision berger_parker brillouin
1 analects 30995    0.989   6.106   4.480     0.067         5.944
2 prince   52105    0.989   6.324   4.531     0.059         6.177

For most of the metrics, the diversity in the use of vocabulary is the same despite being different books from different eras in history. How these numbers are calculated is beyond the scope of this post.
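
To give a feel for one of these metrics, below is a minimal sketch, not part of the original analysis, of a Shannon-style score computed by hand from the word proportions in the Analects rows of the "twobooks" dataframe created above. It will not match qdap's value exactly, since qdap applies its own cleaning before calculating.

words<-unlist(strsplit(tolower(as.character(twobooks$texts[twobooks$book=="analects"])),"\\s+"))
p<-table(words)/length(words) #proportion of each unique word
-sum(p*log(p)) #Shannon-style entropy of the word distribution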

Next, we will calculate the lexical dispersion of the two books. We will look at three common themes in history: money, war, and marriage.

dispersion_plot(twobooks$texts,grouping.var=twobooks$book,c("money","war",'marriage'))

[Lexical dispersion plot of "money," "war," and "marriage" in the two books]

The tick marks show where each word appears. For example, money appears only at the beginning of Analects but is more spread out in The Prince. War is evenly dispersed in both books, and marriage only appears in The Prince.

Conclusion

This analysis showed additional tools that can be used to analyze text in R.

Review of “The Great Wall of China”

In this post, we will look at another book that I have used as a K-12 teacher: The Great Wall of China (Aladdin Picture Books) by Leonard Everett Fisher (31 pp.).

The Summary

The title clearly lets you know what the book is about. It focuses on Ch’in Shih Huang Ti and his quest to build a wall that would protect his empire from the Mongols. According to the text, Ch’in Shih Huang Ti was the first supreme emperor of China as he conquered several other small kingdoms to make what we now know as China.

The book depicts how the Mongols came and burned down border villages in China and how the Emperor planned and built the wall. Men were dragged from their families to go and work on the wall. The Emperor even sent his oldest son and crown prince to help build the wall.

The project was a combination of building a new wall while also restoring walls that were in disrepair. Workers who complained or ran away were buried alive. It took a total of ten years to complete what is now called the Great Wall of China.

The Good 

My favorite aspect of the book is the iconic black and white drawings depicting ancient China. The stern look of the Emperor and the soldiers reminds me of the toughness of characters in old western movies. Nobody smiles in the book until the last page, when the Emperor is rejoicing over the completion of the wall.

The book doesn't include a lot of text. Rather, the pictures do the majority of the storytelling. The pictures are large enough that you can use this book for a whole-class reading experience where the kids sit around you as you read the text and show them the pictures.

The author also did an excellent job of simplifying the complexity of the building of the Great Wall into a few pages for young children. For example, there is much more to the story of the Emperor's son being sent to help build the Great Wall. However, the author reduces this complex problem down to the accusation that the Emperor thought his son was a "whiner."

The Bad

I can't say there is much bad in this text, as it depends on your purpose for buying the book. There is not a lot of text in the book as it is primarily picture-based. If you want your students to read on their own, there is not a lot to read. For those of us who have a background in Chinese history, the text may be oversimplified.

The Recommendation

I would give this book 4.5/5 stars. Whether for your library or for sharing with your entire class this book will provide a great learning experience about a part of history that is normally not studied as much as it should be.

Assessing Speaking in ESL

In this post, we will look at different activities that can be used to assess a language learner's speaking ability. Unfortunately, we will not go over how to mark or grade the activities; we will only provide examples.

Directed Response

In this activity, the teacher tries to have the student use a particular grammatical form by having the student modify something the teacher says. Below is an example.

Teacher: Tell me he went home
Student: He went home

This is obviously not deep. However, the student had to know to remove the words “tell me” from the sentence and they also had to know that they needed to repeat what the teacher said. As such, this is an appropriate form of assessment for beginning students.

Read Aloud

Read aloud is simply having the student read a passage verbatim out loud. Normally, the teacher will assess such things as pronunciation and fluency. There are several problems with this approach. First, reading aloud is not authentic, as it is not an in-demand skill in today's workplace. Second, it blends reading with speaking, which can be a problem if you do not want to assess both at the same time.

Oral Questionnaires 

Students are expected to respond to and/or complete sentences. Normally, there is some sort of setting, such as a mall, school, or bank, that provides the context or pragmatics. Below is an example in which a student has to respond to a bank teller. The blank lines indicate where the student would speak.

Teacher (as bank teller): Would you like to open an account?
Student:_______________________
Teacher (as bank teller): How much would you like to deposit?
Student:___________________________

Visual Cues

Visual cues are highly open-ended. For example, you can give the students a map and ask them to give you directions to a location on the map. In addition, students can describe things in the picture or point to things as you ask them to. You can also ask the students to make inferences about what is happening in a picture. Of course, all of these choices are difficult to grade and may be best suited for formative assessment.

Translation

Translating can be a highly appropriate skill to develop in many contexts. In order to assess this, the teacher provides a word, a phrase, or perhaps something more complicated, such as directly translating their speech. The student then takes the input and reproduces it in the second language.

This is tricky to do. For one, it is required to be done on the spot, which is challenging for anybody. In addition, this also requires the teacher to have some mastery of the student’s mother tongue, which for many is not possible.

Other Forms

There are many more examples that cannot be covered here, such as interviews, role play, and presentations. However, these are much more common forms of speaking assessment, so most teachers are already familiar with them.

Conclusion

Speaking assessment is a major component of the ESL teaching experience. The ideas presented here will hopefully provide some additional ways that this can be done.

Readability and Formality Analysis in R

In this post, we will look at how to assess the readability and formality of a text using R. By readability, we mean the use of a formula that will provide us with the grade level at which the text is roughly written. This is highly useful information in the field of education and even medicine.

Formality provides insights into how the text relates to the reader. The more formal the writing the greater the distance between author and reader. Formal words are nouns, adjectives, prepositions, and articles while informal (contextual) words are pronouns, verbs, adverbs, and interjections.

The F-measure counts these word classes and calculates a formality score based on the proportions of formal and informal words.

We will conduct our two analyses by comparing two famous philosophical texts:

  • Analects
  • The Prince

These books are available at the Gutenberg Project. You can go to the site, type in the titles, and download them to your computer.

We will use the "qdap" package in order to complete the analysis. Below is some initial code.

library(qdap)

Data Preparation

Below are the steps we need to take to prepare the data:

  1. Paste the text files into R
  2. Convert the text files to ASCII format
  3. Convert the ASCII format to data frames
  4. Split the sentences in the data frame
  5. Add a variable that indicates the book name
  6. Combine the two books into one dataframe

We now need to prepare the two texts. The "paste" function will move the text into the R environment.

analects<-paste(scan(file ="C:/Users/darrin/Documents/R/R working directory/blog/blog/Text/Analects.txt",what='character'),collapse=" ")
prince<-paste(scan(file ="C:/Users/darrin/Documents/R/R working directory/blog/blog/Text/Prince.txt",what='character'),collapse=" ")

The texts need to be converted to ASCII format, and the code below does this.

analects<-iconv(analects,"latin1","ASCII","")
prince<-iconv(prince,"latin1","ASCII","")

For each book, we need to make a dataframe. The argument "texts" gives our dataframe one variable called "texts" which contains all the words in each book. Below is the code.

analects<-data.frame(texts=analects)
prince<-data.frame(texts=prince)

With the dataframes completed, we can now split the variable "texts" in each dataframe by sentence. The "sentSplit" function will do this.

analects<-sentSplit(analects,'texts')
prince<-sentSplit(prince,'texts')

Next, we add the variable "book" to each dataframe. For each row or sentence in the dataframe, the "book" variable will tell you which book the sentence came from. This will be useful for comparative purposes.

analects$book<-"analects"
prince$book<-"prince"

Lastly, we combine the two books into one dataframe. The data preparation is now complete.

twobooks<-rbind(analects,prince)

Data Analysis

We will begin with the readability. The "automated_readability_index" function will calculate this for us.

ari<-automated_readability_index(twobooks$texts,twobooks$book)
ari
##       book word.count sentence.count character.count Automated_Readability_Index
## 1 analects      30995           3425          132981                       3.303
## 2   prince      52105           1542          236605                      16.853

Analects is written at a third-grade level, but The Prince is written at grade 16. This is equivalent to a senior in college. As such, The Prince is a challenging book to read.
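
As a rough check, and assuming the standard Automated Readability Index formula (which the original post does not show), the scores above can be reproduced by hand from the word, sentence, and character counts in the output.

#ARI = 4.71*(characters/words) + 0.5*(words/sentences) - 21.43
4.71*(132981/30995)+0.5*(30995/3425)-21.43 #roughly 3.30 for Analects
4.71*(236605/52105)+0.5*(52105/1542)-21.43 #roughly 16.85 for The Prince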

Next, we will calculate the formality of the two books. The "formality" function is used for this.

form<-formality(twobooks$texts,twobooks$book)
form
##       book word.count formality
## 1   prince      52181     60.02
## 2 analects      31056     58.36

The books are mildly formal. The code below gives you the breakdown of word use by percentage.

form$form.prop.by
##       book word.count  noun  adj  prep articles pronoun  verb adverb
## 1 analects      31056 25.05 8.63 14.23     8.49   10.84 22.92   5.86
## 2   prince      52181 21.51 9.89 18.42     7.59   10.69 20.74   5.94
##   interj other
## 1   0.05  3.93
## 2   0.00  5.24
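
As a hedged aside, the formality scores reported earlier appear to follow Heylighen and Dewaele's F-measure, which the post does not state explicitly. Plugging the Analects percentages from the table above into that formula comes very close to the reported score.

formal<-25.05+8.63+14.23+8.49 #noun + adjective + preposition + article (Analects row)
informal<-10.84+22.92+5.86+0.05 #pronoun + verb + adverb + interjection (Analects row)
(formal-informal+100)/2 #roughly 58.4, close to the formality score reported earlier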

The proportions are consistent when the two books are compared. Below is a visual of the table we just examined.

plot(form)

[Plot of the formality proportions by book]

Conclusion

Readability and formality are additional text mining tools that can provide insights for data scientists. Both of these analyses can be used to enhance communication or to compare different authors and writing styles.

Review of “The Usborne Book of World History”

As a teacher, I have used hundreds of books in my career to instruct and guide students. As such, I decided I would share my thoughts on some of these books to provide other educators with insights on potential instructional materials.

The Summary

"The Usborne Book of World History" covers, as you can probably guess, the history of the world from the beginning of recorded history to the dawn of the 20th century. Early civilizations such as the Sumerians, Egyptians, and Hittites are covered, as well as more recent civilizations such as the Greeks, Romans, and British Empire. There are also mentions of African and several Asian civilizations, such as the Chinese and Japanese.

The Good

This book is rich with illustrations of all of the historical events and cultural topics. For young readers who are visual learners, this is a superb text. There are illustrations of fighting between the Canaanites and the Philistines, an Assyrian king fighting a lion, life in the city of ancient Athens, and even depictions of settlers moving out west in what would later become the United States. This is a completely visually based learning experience.

The Bad

The focus on illustrations is a strength but also a weakness, depending on your goals. Students can get so absorbed in the pictures that they never actually read the text in the book. This can be a problem if you are trying to get your students to develop their reading skills. In addition, the reading level of the text is probably at the 4th-5th grade level, which is beyond younger students.

This book is also not appropriate for a whole class read aloud because several illustrations are crammed onto one page. This would make it challenging for several students to all see what the teacher was talking about at the same time, which could quickly lead to behavioral problems.

Lastly, the book is somewhat detail-oriented in that it provides many little facts about each kingdom or historical period. If you want the students to see the big picture, you have to trace the themes of history yourself, as there seem to be no pedagogical aids beyond the rich illustrations.

The Recommendation

The Usborne Book of World History absolutely deserves 4/5 stars. Buy it and put it in your library as an opportunity for your students to "see" history rather than read about it. If you need something for whole-class instruction, you had better keep looking.

Homeschooling Bilingual Children

A colleague of mine has kids who are half Thai and half African (like Tiger Woods). In the home, both Thai and English are spoken frequently. A major problem with bilingual children is that one of the languages is never truly mastered. This is called semilingualism. The problem was not with the kids learning Thai, because their mother was Thai. Instead, my colleague was worried about his kids developing broken, poorly understood pidgin English.

About 8 years ago, there was another family whose children were half Thai and half American, and they faced the same problem. However, they overreacted and never spoke Thai in their home in order to make sure their children learned English. This led to the kids knowing only English even though they were half Thai and lived in Thailand. My friend did not want to make this mistake.

What He Did

I suggested to my friend that he needed to set some sort of schedule in which time was set aside in the home for the use of both languages. Below is the schedule that he developed.

  • Monday – Friday from waking up until 2 pm Thai language
  • Monday-Friday 2 pm to bedtime English language
  • Weekends-English only
  • Exceptions-Home school curriculum is in English with the exception of Thai language

This has worked relatively well. The children are exposed to both languages each day for several hours at a time. Generally, the rule is that when dad is home, English is used.

To further support the acquisition of English, I encouraged my friend to never speak any Thai to his children. This has stunted his own development in Thai, but it is more important that the children learn English than that he learn Thai.

For the oldest daughter, who is homeschooled, Dan and his wife taught her to read and write in Thai and English at the same time. Many language experts would disagree with this and suggest that it is better to learn one language first and then transfer those skills to learning a second language. I see their point, but my friend wanted his daughter to have native fluency in both languages, to the point that if she is having a dream, both languages could be used without a problem, so to speak.

Challenges

With bilingual children, all language goals are delayed. This is because the child has to acquire double the vocabulary of a monolingual child. My friend's daughter didn't really talk until she was three. However, by five, things started to move at a normal pace, with some "problems":

  • Word order is sometimes wrong, i.e., my friend's daughter will use Thai syntax in English and vice versa.
  • Mixing of the two languages at times (code-switching)

Most kids grow out of this.

Conclusion

Raising bilingual children requires finding a balance between the two languages in the home. I have provided one example but I would like to know how you have dealt with this with your children.

Sentiment Analysis in R

In this post, we will perform a sentiment analysis in R. Sentiment analysis employs dictionaries to give each word in a sentence a score. A more positive word is given a higher positive number, while a more negative word is given a more negative number. The score is then calculated based on the position of the word, its weight, and other more complex factors. This is then performed for the entire corpus to give it a score.
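
Before working with the books, here is a minimal illustration, not from the original post, of the "polarity" function on two toy sentences. The exact scores depend on the dictionary qdap uses, so treat the numbers as indicative only.

library(qdap)
polarity(c("This book is wonderful and wise.","This book is dull and miserable.")) #the first sentence should score positive and the second negative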

We will do a sentiment analysis in which we will compare three famous philosophical texts:

  • Analects
  • The Prince
  • Pensees

These books are available at the Gutenberg Project. You can go to the site, type in the titles, and download them to your computer.

We will use the “qdap” package in order to complete the sentiment analysis. Below is some initial code.

library(qdap)

Data Preparation

Below are the steps we need to take to prepare the data:

  1. Paste the text files into R
  2. Convert the text files to ASCII format
  3. Convert the ASCII format to data frames
  4. Split the sentences in the data frame
  5. Add a variable that indicates the book name
  6. Combine the three books into one dataframe

We now need to prepare the three texts. First, we move them into R using the "paste" function.

analects<-paste(scan(file ="C:/Users/darrin/Documents/R/R working directory/blog/blog/Text/Analects.txt",what='character'),collapse=" ")
pensees<-paste(scan(file ="C:/Users/darrin/Documents/R/R working directory/blog/blog/Text/Pascal.txt",what='character'),collapse=" ")
prince<-paste(scan(file ="C:/Users/darrin/Documents/R/R working directory/blog/blog/Text/Prince.txt",what='character'),collapse=" ")

We need to convert the text files to ASCII format so that R is able to read them.

analects<-iconv(analects,"latin1","ASCII","")
pensees<-iconv(pensees,"latin1","ASCII","")
prince<-iconv(prince,"latin1","ASCII","")

Now we make our dataframe for each book. The argument "texts" gives our dataframe one variable called "texts" which contains all the words in each book. Below is the code.

analects<-data.frame(texts=analects)
pensees<-data.frame(texts=pensees)
prince<-data.frame(texts=prince)

With the dataframes completed, we can now split the variable "texts" in each dataframe by sentence. We will use the "sentSplit" function to do this.

analects<-sentSplit(analects,'texts')
pensees<-sentSplit(pensees,'texts')
prince<-sentSplit(prince,'texts')

Next, we add the variable "book" to each dataframe. For each row or sentence in the dataframe, the "book" variable will tell you which book the sentence came from. This will be valuable for comparative purposes.

analects$book<-"analects"
pensees$book<-"pensees"
prince$book<-"prince"

Now we combine all three books into one dataframe. The data preparation is now complete.

threebooks<-rbind(analects,pensees,prince)

Data Analysis

We are now ready to perform the actual sentiment analysis. We will use the "polarity" function for this. Inside the function, we need to use the text and the book variables. Below is the code.

pol<-polarity(threebooks$texts,threebooks$book)

We can see the results and a plot in the code below.

pol
##       book total.sentences total.words ave.polarity sd.polarity stan.mean.polarity
## 1 analects            3425       31383        0.076       0.254              0.299
## 2  pensees            7617      101043        0.008       0.278              0.028
## 3   prince            1542       52281        0.017       0.296              0.056

The table is mostly self-explanatory. We have the total number of sentences and words in the first two columns. Next is the average polarity and the standard deviation. Lastly, we have the standardized mean, which is commonly used for comparison purposes. It appears that Analects is the most positive book by a large margin, with Pensees and The Prince being about the same and generally neutral.
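
As a hedged note, the standardized mean polarity appears to be the average polarity divided by its standard deviation; the original post does not state this, but the ratios line up with the table above.

0.076/0.254 #roughly 0.30 for Analects
0.008/0.278 #roughly 0.03 for Pensees
0.017/0.296 #roughly 0.06 for The Prince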

plot(pol)

[Polarity plots for the three books]

The top plot shows the polarity of each sentence over time or through the book. The bluer the more negative and the redder the more positive the sentence. The second plot shows the dispersion of the polarity.

There are many things to interpret from the second plot. For example, Pensees is more dispersed than the other two books in terms of polarity. The Prince is much less dispersed in comparison to the other books.

Another interesting task is to find the most negative and positive sentence. We need to take information from the “pol” dataframe and then use the “which.min” function to find the lowest scoring. The “which.min” function only gives the row. Therefore, we need to take this information and use it to find the actual sentence and the book. Below is the code.

pol.df<-pol$all #take all the polarity scores from pol
which.min(pol.df$polarity) #find the lowest scored sentence
## [1] 6343
pol.df$text.var[6343] #find the actual sentence
## [1] "Apart from Him there is but vice, misery, darkness, death, despair."
pol.df$book[6343] #find the actual book name
## [1] "pensees"

Pensees had the most negative sentence. You can see for yourself the clearly negative words: vice, misery, darkness, death, and despair. We can repeat this for the most positive sentence.

which.max(pol.df$polarity)
## [1] 4839
pol.df$text.var[4839]
## [1] "You will be faithful, honest, humble, grateful, generous, a sincere friend, truthful."
pol.df$book[4839]
## [1] "pensees"

Again, Pensees has the most positive sentence, with such words as faithful, honest, humble, grateful, generous, sincere, friend, and truthful all being positive.

Conclusion

Sentiment analysis allows for the efficient analysis of a large body of text in a highly qualitative manner. There are weaknesses to this approach; for example, the dictionary used to classify the words can affect the results. In addition, sentiment analysis only looks at individual sentences and not larger contextual circumstances such as a paragraph. As such, a sentiment analysis provides descriptive insights rather than generalizations.

Struggles with Early Childhood Education

I had a friend (Dan) share his experience of homeschooling his oldest daughter (Jina) and the challenges he faced as he tried, in his opinion, to start her education too early. He began homeschooling his oldest daughter when she was about four years of age. His goals for the first year were simply for his daughter

  • to learn to count to 10
  • to recognize the letters of the alphabet

That was all he wanted for the first year of instruction. Dan knew Jina was young, perhaps too young, so he did not want to push it. He just wanted to develop a rhythm of learning and instruction in the family along with the two goals above. In addition, his family was one of only two families who homeschooled their kids in his community, and he wanted to make sure his daughter was always on par academically with the other children in the neighborhood as a witness to the benefits of homeschooling.

Yet, a strange thing happened. Both academic goals were achieved in less than four months, and Jina was already getting bored with school. This meant that Dan now had to raise the level of complexity with more goals:

  • recognize numbers
  • Know the sounds of all the letters of the alphabet

By the end of the first year (age 5 now), without any pressure and by going at her own pace, my friend's daughter could read simple words, count objects, recognize numbers, do simple addition and subtraction, and had the rudiments of telling time. However, near the end of the first year of learning, some strange things began to happen.

  • One day Jina would complete a task with no problems, but the next day she could not seem to remember the slightest way how to do it. She seemed to lose motivation for no reason.
  • Some concepts (telling time) never stuck, no matter how many times they were taught and reviewed.
  • She was inconsistent in her ability to recognize words and seemed to lack any ability to generalize concepts (transfer) to other settings, for example, realizing that 'cap', 'snap', and 'lap' all end with the -ap ending.

When she turned five, Dan and his wife formally started Jina on an official homeschool curriculum rather than the ad-hoc material they had used for the first year. Jina now had the ability to do 1st-grade work thanks to her parents' prior teaching. Old struggles subsided and new ones appeared. Unlike the ad-hoc curriculum, the formal homeschool curriculum had weekly lesson plans, and Dan was determined to stick to the "schedule."

Why the Struggle

Dan still wondered what the problem was. Jina was progressing, but it was a chore, and he couldn't understand why. Isn't it good to start kids in school early? That's when he asked me.

I explained to him some of the basics of Piaget’s theory of cognitive development. This is not just any theory. Piaget’s ideas are taught to almost all undergrad education majors on the planet.

Piaget proposes that there are four stages of cognitive development

  1. Sensorimotor (0-2 years)-Learning only through senses
  2. Preoperational (2-7)-Symbolic thinking and pretend play
  3. Concrete Operational (7-11)-Ideas applied to concrete objects; understanding of time and quantity.
  4. Formal Operations (12-adult)-Abstract thinking, logic, transfer possible.

Dan was teaching his daughter all of these abstract ideas (counting, reading, telling time, etc) when she was at a preoperational level cognitively.

Reading is a highly abstract experience. Letters on a page have sounds attached to them, and these letters can be combined to make words. This is astounding for a child, and their minds will struggle with it if they are not ready. Numbers on a page represent an actual amount in the real world. This is another astounding breakthrough for a young child. Dan was teaching his daughter to tell time when she had no idea what time was! He was frustrated when she could not transfer knowledge to new settings, when this is normally not possible until children are 11 years or older.

If a child is not developmentally ready for these complex ideas, they will struggle with school. If Piaget's theory is correct (and not everyone agrees), formal schooling should not begin until age 7 for most children. By formal schooling, I mean the study of math and reading. Children should begin learning math and reading at 7. However, traditionally, students have been studying these subjects for several years by the age of 7.

This is not a totally radical idea. Many parents are delaying the enrollment of their child in kindergarten by a year in order to allow them to develop more. The term for this is redshirting.

What He Did

By the time I spoke with Dan, Jina was six years old and already in second grade. She was doing better, but now Dan and his wife worried about burnout. He did not want to stop her studies completely because stopping now would mean having to fight with her to begin again. I suggested that they slow down the instruction. Now they complete a weekly lesson plan over two weeks instead of one. This helps to minimize the damage that has taken place while still maintaining a structure of learning in the home. Unfortunately, Jina is learning multiplication when she should be learning to count.

Conclusion

I can say that there is evidence that early education is not best for children. If Piaget is correct, a child under 7 is not ready for rigorous study and should be allowed more hands-on experiences rather than abstract ones. Of course, there are exceptions, but generally, you can start too early, while it is difficult to start too late. If a child starts too early, they will be in a constant state of struggling. All children are different, but parents should be aware that waiting is an option when it comes to formal instruction, and one benefit of homeschooling is the ability to have authority over your child's education.

Types of Speaking in ESL

In the context of ESL teaching, there are at least five types of speaking that take place in the classroom. This post will define and provide examples of each. The five types are as follows…

  • Imitative
  • Intensive
  • Responsive
  • Interactive
  • Extensive

The list above is ordered from simplest to most complex in terms of the requirements of oral production for the student.

Imitative

At the imitative level, it is probably already clear what the student is trying to do. At this level, the student is simply trying to repeat what was said to them in a way that is understandable and with some adherence to pronunciation as defined by the teacher.

It doesn't matter whether the student comprehends what they are saying or is carrying on a conversation. The goal is only to reproduce what was said to them. One common example of this is a "repeat after me" exercise in the classroom.

Intensive

Intensive speaking involves producing a limited amount of language in a highly controlled context. An example of this would be reading aloud a passage or giving a direct response to a simple question.

Competency at this level is shown through achieving certain grammatical or lexical mastery. This depends on the teacher’s expectations.

Responsive

Responsive is slightly more complex than intensive but the difference is blurry, to say the least. At this level, the dialog includes a simple question with a follow-up question or two. Conversations take place by this point but are simple in content.

Interactive

The unique feature of interactive speaking is that it is usually more interpersonal than transactional. By interpersonal, it is meant speaking for the purpose of maintaining relationships. Transactional speaking is for sharing information, as is common at the responsive level.

The challenge of interpersonal speaking is the context or pragmatics. The speaker has to keep in mind the use of slang, humor, ellipsis, etc. when attempting to communicate. This is much more complex than saying yes or no or giving directions to the bathroom in a second language.

Extensive

Extensive communication is normally some sort of monologue. Examples include speeches, storytelling, etc. This involves a great deal of preparation and is not typically improvisational communication.

It is one thing to survive having a conversation with someone in a second language. You can rely on each other’s body language to make up for communication challenges. However, with extensive communication either the student can speak in a comprehensible way without relying on feedback or they cannot. In my personal experience, the typical ESL student cannot do this in a convincing manner.

Visualizing Clustered Data in R

In this post, we will look at how to visualize multivariate clustered data. We will use the "Hitters" dataset from the "ISLR" package and the features of the various baseball players as the dimensions for the clustering. Below is the initial code.

library(ISLR);library(cluster)
data("Hitters")
str(Hitters)
## 'data.frame':    322 obs. of  20 variables:
##  $ AtBat    : int  293 315 479 496 321 594 185 298 323 401 ...
##  $ Hits     : int  66 81 130 141 87 169 37 73 81 92 ...
##  $ HmRun    : int  1 7 18 20 10 4 1 0 6 17 ...
##  $ Runs     : int  30 24 66 65 39 74 23 24 26 49 ...
##  $ RBI      : int  29 38 72 78 42 51 8 24 32 66 ...
##  $ Walks    : int  14 39 76 37 30 35 21 7 8 65 ...
##  $ Years    : int  1 14 3 11 2 11 2 3 2 13 ...
##  $ CAtBat   : int  293 3449 1624 5628 396 4408 214 509 341 5206 ...
##  $ CHits    : int  66 835 457 1575 101 1133 42 108 86 1332 ...
##  $ CHmRun   : int  1 69 63 225 12 19 1 0 6 253 ...
##  $ CRuns    : int  30 321 224 828 48 501 30 41 32 784 ...
##  $ CRBI     : int  29 414 266 838 46 336 9 37 34 890 ...
##  $ CWalks   : int  14 375 263 354 33 194 24 12 8 866 ...
##  $ League   : Factor w/ 2 levels "A","N": 1 2 1 2 2 1 2 1 2 1 ...
##  $ Division : Factor w/ 2 levels "E","W": 1 2 2 1 1 2 1 2 2 1 ...
##  $ PutOuts  : int  446 632 880 200 805 282 76 121 143 0 ...
##  $ Assists  : int  33 43 82 11 40 421 127 283 290 0 ...
##  $ Errors   : int  20 10 14 3 4 25 7 9 19 0 ...
##  $ Salary   : num  NA 475 480 500 91.5 750 70 100 75 1100 ...
##  $ NewLeague: Factor w/ 2 levels "A","N": 1 2 1 2 2 1 1 1 2 1 ...

Data Preparation

We need to remove all of the factor variables because the kmeans algorithm cannot handle factor variables. In addition, we need to remove the "Salary" variable because it is missing data. Lastly, we need to scale the data because differences in scale affect the results of the clustering. The code for all of this is below.

hittersScaled<-scale(Hitters[,c(-14,-15,-19,-20)])

Data Analysis

We will set the k for the kmeans to 3. This can be set to any number, and it often requires domain knowledge to determine what is most appropriate. Below is the code.

kHitters<-kmeans(hittersScaled,3)
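
If domain knowledge is not available, one common heuristic, not used in the original post, is to plot the total within-cluster sum of squares for a range of k values and look for an "elbow" where the improvement levels off. Below is a minimal sketch of this idea.

set.seed(123) #seed chosen arbitrarily for reproducibility
wss<-sapply(1:10,function(k) kmeans(hittersScaled,k,nstart=25)$tot.withinss)
plot(1:10,wss,type="b",xlab="Number of clusters k",ylab="Total within-cluster sum of squares")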

We now look at some descriptive stats. First, we will see how many examples are in each cluster.

table(kHitters$cluster)
## 
##   1   2   3 
## 116 144  62

The groups are mostly balanced. Next, we will look at the mean of each feature by cluster. This will be done with the “aggregate” function. We will use the original data and make a list by the three clusters.

round(aggregate(Hitters[,c(-14,-15,-19,-20)],FUN=mean,by=list(kHitters$cluster)),1)
##   Group.1 AtBat  Hits HmRun Runs  RBI Walks Years CAtBat  CHits CHmRun
## 1       1 522.4 143.4  15.1 73.8 66.0  51.7   5.7 2179.1  597.2   51.3
## 2       2 256.6  64.5   5.5 30.9 28.6  24.3   5.6 1377.1  355.6   24.7
## 3       3 404.9 106.7  14.8 54.6 59.4  48.1  15.1 6480.7 1783.4  207.5
##   CRuns  CRBI CWalks PutOuts Assists Errors
## 1 299.2 256.1  199.7   380.2   181.8   11.7
## 2 170.1 143.6  122.2   209.0    62.4    5.8
## 3 908.5 901.8  694.0   303.7    70.3    6.4

Now we can see some differences. Group 1 appears to be young players (5.7 years of experience) who start, based on the number of at-bats they receive. Group 2 consists of young players who may not get to start, given their lower number of at-bats. Group 3 consists of veteran players (15.1 years) who receive significant playing time and have put together impressive career statistics.

Now we will create our visual of the three clusters. For this, we use the “clusplot” function from the “cluster” package.

clusplot(hittersScaled,kHitters$cluster,color = T,shade = T,labels = 4)

[clusplot of the three clusters]

In general, there is little overlap between the clusters. The overlap between groups 1 and 3 may be due to the fact that both groups receive significant playing time.

Conclusion

Visualizing the clusters can help with developing insights into the groups found during the analysis. This post provided one example of this.

Multidimensional Scale in R

In this post, we will explore multidimensional scaling (MDS) in R. The main benefit of MDS is that it allows you to plot multivariate data into two dimensions. This allows you to create visuals of complex models. In addition, the plotting of MDS allows you to see relationships among examples in a dataset based on how far or close they are to each other.

We will use the “College” dataset from the “ISLR” package to create an MDS of the colleges that are in the data set. Below is some initial code.

library(ISLR);library(ggplot2)
data("College")
str(College)
## 'data.frame':    777 obs. of  18 variables:
##  $ Private    : Factor w/ 2 levels "No","Yes": 2 2 2 2 2 2 2 2 2 2 ...
##  $ Apps       : num  1660 2186 1428 417 193 ...
##  $ Accept     : num  1232 1924 1097 349 146 ...
##  $ Enroll     : num  721 512 336 137 55 158 103 489 227 172 ...
##  $ Top10perc  : num  23 16 22 60 16 38 17 37 30 21 ...
##  $ Top25perc  : num  52 29 50 89 44 62 45 68 63 44 ...
##  $ F.Undergrad: num  2885 2683 1036 510 249 ...
##  $ P.Undergrad: num  537 1227 99 63 869 ...
##  $ Outstate   : num  7440 12280 11250 12960 7560 ...
##  $ Room.Board : num  3300 6450 3750 5450 4120 ...
##  $ Books      : num  450 750 400 450 800 500 500 450 300 660 ...
##  $ Personal   : num  2200 1500 1165 875 1500 ...
##  $ PhD        : num  70 29 53 92 76 67 90 89 79 40 ...
##  $ Terminal   : num  78 30 66 97 72 73 93 100 84 41 ...
##  $ S.F.Ratio  : num  18.1 12.2 12.9 7.7 11.9 9.4 11.5 13.7 11.3 11.5 ...
##  $ perc.alumni: num  12 16 30 37 2 11 26 37 23 15 ...
##  $ Expend     : num  7041 10527 8735 19016 10922 ...
##  $ Grad.Rate  : num  60 56 54 59 15 55 63 73 80 52 ...

Data Preparation

After using the "str" function, we know that we need to remove the variable "Private" because it is a factor, and the type of MDS we are doing can only accommodate numerical variables. After removing this variable, we will make a matrix using the "as.matrix" function. Once the matrix is ready, we can use the "cmdscale" function to create the actual two-dimensional MDS. Another point to mention is that, for the sake of simplicity, we are only going to use the first ten colleges in the dataset. The reason is that using all 777 would make it hard to understand the plots we will make. Below is the code.

collegedata<-as.matrix(College[,-1])
collegemds<-cmdscale(dist(collegedata[1:10,]))
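
As an optional check, not part of the original post, "cmdscale" can also return eigenvalues, which give a rough sense of how much of the distance structure the two-dimensional solution preserves. The object name below is only illustrative.

collegemdsCheck<-cmdscale(dist(collegedata[1:10,]),k=2,eig=TRUE)
collegemdsCheck$GOF #goodness-of-fit proportions for the two-dimensional solution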

Data Analysis

We can now make our initial plot. The xlim and ylim arguments had to be played with a little for the plot to display properly. In addition, the “text” function was used to provide additional information such as the names of the colleges.

plot(collegemds,xlim=c(-15000,10000),ylim=c(-15000,10000))
text(collegemds[,1],collegemds[,2],rownames(collegemds))

[Scatterplot of the MDS coordinates for the first ten colleges]

From the plot, you can see that even with only ten names it is messy. The colleges are mostly clumped together, which makes it difficult to interpret. We can plot this as a four-quadrant graph using "ggplot2". First, we need to convert the matrix that we created to a dataframe.

collegemdsdf<-as.data.frame(collegemds)

We are now ready to use “ggplot” to create the four quadrant plot.

p<-ggplot(collegemdsdf, aes(x=V1, y=V2)) +
        geom_point() +
        lims(x=c(-10000,8000),y=c(-4000,5000)) +
        theme_minimal() +
        coord_fixed() +  
        geom_vline(xintercept = 5) + geom_hline(yintercept = 5)+geom_text(aes(label=rownames(collegemdsdf)))
p+theme(panel.grid.major = element_blank(), panel.grid.minor = element_blank())

[Four-quadrant ggplot of the MDS results]

We set the horizontal and vertical reference lines near the origin. By doing this, the graph is much easier to understand and interpret. Agnes Scott College is way off to the left, while Alaska Pacific University, Abilene Christian College, and even Alderson-Broaddus College are clumped together. The rest of the colleges straddle the area below the x-axis.

Conclusion

In this example, we took several variables and condensed them to two dimensions. This is the primary benefit of MDS. It allows you to visualize what cannot normally be visualized. This visualization allows you to see the structure of the data, from which you can draw inferences.

Topic Models in R

Topic modeling is a tool that can group texts by their main themes. It involves the use of probability based on word frequencies. The algorithm that does this is called the Latent Dirichlet Allocation (LDA) algorithm.

In this post, we will use some text mining tools to analyze religious/philosophical texts. The five texts we will look at are

  • The King James Bible
  • The Quran
  • The Book of Mormon
  • The Gospel of Buddha
  • Meditations, by Marcus Aurelius

The link for access to these five text files is as follows https://www.kaggle.com/tentotheminus9/religious-and-philosophical-texts/downloads/religious-and-philosophical-texts.zip

Once you unzip it you will need to rename each file appropriately.

The next few paragraphs are almost verbatim from the post text mining in R. This is because the data preparation is essentially the same. Small changes were made but original material is found in the analysis section of this post.

We will now begin the actual analysis. The packages we need are "tm" and "topicmodels". Below is some initial code.

library(tm);library(topicmodels)

Data Preparation

We need to do three things for each text file

  1. Paste it
  2. convert it
  3. write a table

Below is the code for pasting the text into R. Keep in mind that your code will be slightly different, as the location of the files on your computer will be different. The "what" argument tells R what to take from the file, and the "collapse" argument deals with whitespace.

bible<-paste(scan(file ="/home/darrin/Desktop/speech/bible.txt",what='character'),collapse=" ")
buddha<-paste(scan(file ="/home/darrin/Desktop/speech/buddha.txt",what='character'),collapse=" ")
meditations<-paste(scan(file ="/home/darrin/Desktop/speech/meditations.txt",what='character'),collapse=" ")
mormon<-paste(scan(file ="/home/darrin/Desktop/speech/mormon.txt",what='character'),collapse=" ")
quran<-paste(scan(file ="/home/darrin/Desktop/speech/quran.txt",what='character'),collapse=" ")

Now we need to convert the new objects we created to ASCII text. This removes a lot of “funny” characters from the objects. For this, we use the “iconv” function. Below is the code.

bible<-iconv(bible,"latin1","ASCII","")
meditations<-iconv(meditations,"latin1","ASCII","")
buddha<-iconv(buddha,"latin1","ASCII","")
mormon<-iconv(mormon,"latin1","ASCII","")
quran<-iconv(quran,"latin1","ASCII","")

The last step of the preparation is the creation of tables. Here we take the objects we have already created and write them to their own folder. The text files need to be alone in the folder in order to conduct the analysis. Below is the code.

write.table(bible,"/home/darrin/Documents/R working directory/textminingegw/mine/bible.txt")
write.table(meditations,"/home/darrin/Documents/R working directory/textminingegw/mine/meditations.txt")
write.table(buddha,"/home/darrin/Documents/R working directory/textminingegw/mine/buddha.txt")
write.table(mormon,"/home/darrin/Documents/R working directory/textminingegw/mine/mormon.txt")
write.table(quran,"/home/darrin/Documents/R working directory/textminingegw/mine/quran.txt")

Corpus Development

We are now ready to create the corpus. This is the object we use to clean the text together rather than individually as before. First, we need to make the corpus object; below is the code. Notice how it contains the directory where our tables are.

docs<-Corpus(DirSource("/home/darrin/Documents/R working directory/textminingegw/mine"))

There are many different ways to prepare the corpus. For our example, we will do the following…

  • Lower case all letters-This avoids the same word being counted separately (ie sheep and Sheep)
  • Remove numbers
  • Remove punctuation-Simplifies the document
  • Remove whitespace-Simplifies the document
  • Remove stopwords-Words that have a function but not a meaning (ie to, the, this, etc)
  • Remove custom words-Provides additional clarity

Below is the code for this

docs<-tm_map(docs,tolower)
docs<-tm_map(docs,removeNumbers)
docs<-tm_map(docs,removePunctuation)
docs<-tm_map(docs,removeWords,stopwords('english'))
docs<-tm_map(docs,stripWhitespace)
docs<-tm_map(docs,removeWords,c("chapter","also","no","thee","thy","hath","thou","thus","may",
                                "thee","even","yet","every","said","this","can","unto","upon",
                                "cant",'shall',"will","that","weve","dont","wont"))

We now need to create the document-term matrix. This matrix is what R will actually analyze. We will then remove sparse terms, which are terms that do not occur in a certain percentage of the documents in the matrix. For our purposes, we will set the sparsity to .60. This means that a word must appear in 3 of the 5 books of our analysis. Below is the code. The 'dim' function will allow you to see how the number of terms is reduced drastically. This is done without losing a great deal of data while speeding up computational time.

dtm<-DocumentTermMatrix(docs)
dim(dtm)
## [1]     5 24368
dtm<-removeSparseTerms(dtm,0.6)
dim(dtm)
## [1]    5 5265

Analysis

We will now create our topics or themes. If there is no a priori information on how many topics to make, it is up to you to decide how many. We will create three topics. The “LDA” function is used, and the argument “k” is set to three, indicating we want three topics. Below is the code.

set.seed(123)
lda3<-LDA(dtm,k=3)

We can see which topic each book was assigned to using the “topics” function. Below is the code.

topics(lda3)
##       bible.txt      buddha.txt meditations.txt      mormon.txt 
##               2               3               3               1 
##       quran.txt 
##               3
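
If you want more than the hard assignments, the “posterior” function from the “topicmodels” package (the package that provides “LDA”) returns the estimated topic probabilities for each book. Below is a minimal sketch of this idea; the exact probabilities will depend on your run.

library(topicmodels) #supplies LDA, topics, terms, and posterior
bookTopics<-posterior(lda3)$topics #rows are books, columns are topic probabilities
round(bookTopics,3) #see how strongly each book loads on each topic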

According to the results, the Book of Mormon and the Bible were so unique that they each had their own topic (1 and 2, respectively). The other three texts (the Gospel of Buddha, Meditations, and the Quran) were all placed in topic 3. It is surprising that the Bible and the Book of Mormon were in separate topics since they are both Christian texts. It is also surprising that the Gospel of Buddha, Meditations, and the Quran are all under the same topic, as it seems that these texts have little in common.

We can also use the “terms” function to see what the most common words are for each topic. The first argument in the function is the model name followed by the number of words you want to see. We will look at 10 words per topic.

terms(lda3, 10)
##       Topic 1  Topic 2  Topic 3 
##  [1,] "people" "lord"   "god"   
##  [2,] "came"   "god"    "one"   
##  [3,] "god"    "israel" "things"
##  [4,] "behold" "man"    "say"   
##  [5,] "pass"   "son"    "truth" 
##  [6,] "lord"   "king"   "man"   
##  [7,] "yea"    "house"  "lord"  
##  [8,] "land"   "one"    "life"  
##  [9,] "now"    "come"   "see"   
## [10,] "things" "people" "good"

Interpreting these results takes qualitative skill and is subjective. They all seem to be talking about similar things. Topic 2 (the Bible) seems to focus on Israel and the Lord, while topic 1 (the Book of Mormon) is about God and people. Topic 3 (the Gospel of Buddha, Meditations, and the Quran) speaks of God as well, but the emphasis has moved to truth and the word one.

Conclusion

This post provided insight into developing topic models using R. The results of a topic model analysis are highly subjective and will often require strong domain knowledge. Furthermore, the number of topics is highly flexible, and in the example in this post we could have used a different number of topics for comparative purposes.

Authentic Listening Tasks

There are many different ways in which a teacher can assess the listening skills of their students. Recognition, paraphrasing, cloze tasks, transfer, etc. are all ways to determine a student’s listening proficiency.

One criticism of the tasks above is that they are primarily inauthentic. This means that they do not strongly reflect something that happens in the real world.

In response to this, several authentic listening assessments have been developed over the years. These authentic listening assessments include the following.

  • Editing
  • Note-taking
  • Retelling
  • Interpretation

This post will examine each of the authentic listening assessments listed above.

Editing

An editing task that involves listening begins with the student receiving reading material. The student reviews the reading material and then listens to a recording of someone reading aloud the same material. The student then marks the hard copy they have whenever there are differences between the reading material and what the recording says.

Such an assessment requires the student to listen carefully for discrepancies between the reading material and the recording. This requires strong reading abilities and phonological knowledge.

Note-Taking

For those who are developing language skills for academic reasons, note-taking is a highly authentic form of assessment. In this approach, the students listen to some type of lecture and attempt to write down what they believe is important from the lecture.

The students are then assessed on some sort of rubric/criteria developed by the teacher. As such, marking note-taking can be highly subjective. However, the authenticity of note-taking can make it a valuable learning experience even if providing a grade is difficult.

Retelling

How retelling works should be somewhat obvious. The student listens to some form of talk. After listening, the student needs to retell or summarize what they heard.

Assessing the accuracy of the retelling has the same challenges as the note-taking assessment. However, it may be better to use retelling to encourage learning rather than provide evidence of the mastery of a skill.

Interpretation

Interpretation involves the students listening to some sort of input. After listening, the student then needs to infer the meaning of what they heard. The input can be a song, poem, news report, etc.

For example, if the student listens to a song they may be asked to explain why the singer was happy or sad depending on the context of the song. Naturally, they cannot hope to answer such a question unless they understood what they were listening to.

Conclusion

Listening does not need to be artificial. There are several ways to make listening tasks authentic. The examples in this post are just some of the potential ways.

Text Mining in R

Text mining is a descriptive analysis tool that is applied to unstructured textual data. By unstructured, it is meant data that is not stored in relational databases. The majority of data on the Internet, and in the business world in general, is of an unstructured nature. As such, the use of text mining tools has grown in importance over the past two decades.

In this post, we will use some text mining tools to analyze religious/philosophical texts. The five texts we will look at are

  • The King James Bible
  • The Quran
  • The Book Of Mormon
  • The Gospel of Buddha
  • Meditations, by Marcus Aurelius

The link for access to these five text files is as follows

https://www.kaggle.com/tentotheminus9/religious-and-philosophical-texts/downloads/religious-and-philosophical-texts.zip

Once you unzip it you will need to rename each file appropriately.

The actual process of text mining is rather simple and does not involve a great deal of complex coding compared to other machine learning applications. Primarily, you need to do the following.

  • Prep the data by scanning it into R, converting it to ASCII format, and creating a write table for each text
  • Create a corpus that is then cleaned of unnecessary characters
  • Conduct the actual descriptive analysis

We will now begin the actual analysis. The packages we need are “tm” for text mining and “wordcloud” and “RColorBrewer” for visuals. Below is some initial code.

library(tm);library(wordcloud);library(RColorBrewer)

Data Preparation

We need to do three things for each text file.

  • Paste it into R
  • Convert it to ASCII
  • Write a table

Below is the code for pasting the text into R. Keep in mind that your code will be slightly different, as the location of the files on your computer will be different. The “what” argument tells R what to take from the file, and the “collapse” argument deals with whitespace.

bible<-paste(scan(file ="/home/darrin/Desktop/speech/bible.txt",what='character'),collapse=" ")
buddha<-paste(scan(file ="/home/darrin/Desktop/speech/buddha.txt",what='character'),collapse=" ")
meditations<-paste(scan(file ="/home/darrin/Desktop/speech/meditations.txt",what='character'),collapse=" ")
mormon<-paste(scan(file ="/home/darrin/Desktop/speech/mormon.txt",what='character'),collapse=" ")
quran<-paste(scan(file ="/home/darrin/Desktop/speech/quran.txt",what='character'),collapse=" ")

Now we need to convert the new objects we created to ASCII text. This removes a lot of “funny” characters from the objects. For this, we use the “iconv” function. Below is the code.

bible<-iconv(bible,"latin1","ASCII","")
meditations<-iconv(meditations,"latin1","ASCII","")
buddha<-iconv(buddha,"latin1","ASCII","")
mormon<-iconv(mormon,"latin1","ASCII","")
quran<-iconv(quran,"latin1","ASCII","")

The last step of the preparation is the creation of tables. Primarily, you are taking the objects you have already created and moving them to their own folder. The text files need to be alone in order to conduct the analysis. Below is the code.

write.table(bible,"/home/darrin/Documents/R working directory/textminingegw/mine/bible.txt")
write.table(meditations,"/home/darrin/Documents/R working directory/textminingegw/mine/meditations.txt")
write.table(buddha,"/home/darrin/Documents/R working directory/textminingegw/mine/buddha.txt")
write.table(mormon,"/home/darrin/Documents/R working directory/textminingegw/mine/mormon.txt")
write.table(quran,"/home/darrin/Documents/R working directory/textminingegw/mine/quran.txt")

For fun, you can see a snippet of each object by simply typing its name into R, as shown below.

bible
##[1] "x 1 The Project Gutenberg EBook of The King James Bible This eBook is for the use of anyone anywhere at no cost and with almost no restrictions whatsoever. You may copy it, give it away or re-use it under the terms of the Project Gutenberg License included with this eBook or online at www.gutenberg.org Title: The King James Bible Release Date: March 2, 2011 [EBook #10] [This King James Bible was orginally posted by Project Gutenberg in late 1989] Language: English *** START OF THIS PROJECT

Corpus Creation

We are now ready to create the corpus. This is the object we use to clean the text together rather than individually as before. First, we need to make the corpus object. Below is the code. Notice how it contains the directory where our tables are.

docs<-Corpus(DirSource("/home/darrin/Documents/R working directory/textminingegw/mine"))

There are many different ways to prepare the corpus. For our example, we will do the following…

  • Lower case all letters-This avoids the same word being counted separately (ie sheep and Sheep)
  • Remove numbers
  • Remove punctuation-Simplifies the document
  • Remove whitespace-Simplifies the document
  • Remove stopwords-Words that have a function but not a meaning (ie to, the, this, etc)
  • Remove custom words-Provides additional clarity

Below is the code for this

docs<-tm_map(docs,tolower)
docs<-tm_map(docs,removeNumbers)
docs<-tm_map(docs,removePunctuation)
docs<-tm_map(docs,removeWords,stopwords('english'))
docs<-tm_map(docs,stripWhitespace)
#docs<-tm_map(docs,stemDocument)
docs<-tm_map(docs,removeWords,c("chapter","also","no","thee","thy","hath","thou","thus","may",
                                "thee","even","yet","every","said","this","can","unto","upon",
                                "cant",'shall',"will","that","weve","dont","wont"))

We now need to create the matrix. The document term matrix is what R will actually analyze. We will then remove sparse terms. Sparse terms are terms that do not occur at a certain rate in the matrix. For our purposes, we will set the sparsity to .60. This means that a word must appear in 3 of the 5 books in our analysis. Below is the code. The ‘dim’ function will allow you to see how the number of terms is reduced drastically. This is done without losing a great deal of data while speeding up computational time.

dtm<-DocumentTermMatrix(docs)
dim(dtm)
## [1]     5 24368
dtm<-removeSparseTerms(dtm,0.6)
dim(dtm)
## [1]    5 5265

Analysis

We can now explore the text. First, we need to take the column sums of the document term matrix, which gives the total frequency of each term. Then we need to change the order so that the most frequent terms come first. Below is the code for this.

freq<-colSums(as.matrix(dtm))
ord<-order(-freq)#changes the order to descending

We can now make a simple bar plot to see what the most common words are. Below is the code

barplot(freq[head(ord)])

[Bar plot of the six most frequent terms]

As expected with religious texts, the most common terms are religious terms. You can also determine which words appeared least often with the code below.

freq[tail(ord)]
##   posting   secured    smiled      sway swiftness worthless 
##         3         3         3         3         3         3

Notice how each word appeared 3 times. This may mean that these terms each appear once in three of the five books. Remember, we set the sparsity to .60, or 3/5.

Another analysis is to determine how many words appear a certain number of times. For example, how many words appear 200 or 300 times. Below is the code.

head(table(freq))
## freq
##   3   4   5   6   7   8 
## 117 230 172 192 191 187

Using the “head” function with the “table” function gives us the six lowest word frequencies and how many words have each frequency. In other words, 117 words appear 3 times, 230 words appear 4 times, etc. Remember that “head” gives the first few values regardless of their amount.

The “findFreqTerms” function allows you to set a cutoff point for how frequent a word needs to be. For example, if we want to know which words appeared at least 3000 times, we would use the following code.

findFreqTerms(dtm,3000)
##  [1] "behold" "came"   "come"   "god"    "land"   "lord"   "man"   
##  [8] "now"    "one"    "people"

The “findAssocs” function finds the correlation between two words in the text. This provides insight into how frequently these words appear together. For our example, we will see which words are associated with war, which is a common subject in many religious texts. We will set the correlation high to keep the list short for the blog post. Below is the code

findAssocs(dtm,"war",corlimit =.998) 
## $war
##     arrows      bands   buildeth    captive      cords     making 
##          1          1          1          1          1          1 
##  perisheth prosperity      tower      wages      yield 
##          1          1          1          1          1

The interpretation of the results can take many forms. It makes sense for ‘arrows’ and ‘captives’ to be associated with ‘war’ but ‘yield’ seems confusing. We also do not know the sample size of the associations.

Our last technique is the development of a word cloud. This allows you to see word frequency based on where the word is located in the cloud as well as its size. For our example, we will set it so that a word must appear at least 1000 times in the corpus with more common words in the middle. Below is the code.

wordcloud(names(freq),freq,min.freq=1000,scale=c(3,.5),colors=brewer.pal(6,"Dark2"),random.color = F,random.order = F)

[Word cloud of the most frequent terms]

Conclusion

This post provided an introduction to text mining in R. There are many more complex features that are available for the more serious user of R than what is described here

Responsive Listening Assessment

Responsive listening involves listening to a small amount of language, such as a command, question, or greeting. After listening, the student is expected to develop an appropriate short response. In this post, we will examine two examples of the use of responsive listening. These two examples are…

  • Open-ended response to question
  • Suitable response to a question

Open-Ended Responsive Listening

When an open-ended item is used in responsive listening, the student listens to a question and provides an answer that suits the context of the question. For example,

Listener hears: What country are you from?
Student writes: _______________________________

Assessing the answer is determined by whether the student was able to develop an answer that is appropriate. The open-ended nature of the question allows for creativity and expressiveness.

A drawback to this openness is determining the correctness of the answers. You have to decide if misspellings, synonyms, etc. count as wrong answers. There are strong arguments for and against counting small mistakes among ESL teachers. Generally, communicative concerns trump concerns over grammar and orthography.

Suitable Response to a Question

Suitable response items often use multiple choice answers that the student selects from in order to answer the question. Below is an example.

Listener hears: What country is Steven from?
Student picks:
a. Thailand
b. Cambodia
c. Philippines
d. Laos

Based on the recording, the student would need to indicate the correct response. The multiple-choice format limits the number of options the student has in replying. This can in many ways make determining the answer much easier than a short answer. No matter what, the student has a 25% chance of being correct in our example.

Since multiple-choice is used, it is important to remember that all the strengths and weaknesses of multiple-choice items apply. This can be good or bad depending on where your students are at in their listening ability.

Conclusion

Responsive listening assessment allows a student to supply an answer to a question that is derived from what they were listening to. This is in many ways a practical way to assess an individual’s basic understanding of a conversation.

Intensive Listening and ESL

Intensive listening is listening for the elements (phonemes, intonation, etc.) in words and sentences. This form of listening is often assessed in an ESL setting as a way to measure an individual’s phonological and morphological awareness as well as their ability to paraphrase. In this post, we will look at these three forms of assessment with examples.

Phonological Elements

Phonological elements include phonemic consonant and phonemic vowel pairs. Phonemic consonant pairs have to do with distinguishing consonants. Below is an example of what an ESL student would hear, followed by potential choices they may have on a multiple-choice test.

Recording: He’s from Thailand

Choices:
(a) He’s from Thailand
(b) She’s from Thailand

The answer is clearly (a). The confusion is between the initial sounds of ‘he’s’ and ‘she’s’ in the two choices. If someone is not listening carefully, they could make a mistake. Below is an example of phonemic pairs involving vowels.

Recording: The girl is leaving?

Choices:
(a)The girl is leaving?
(b)The girl is living?

Again, if someone is not listening carefully they will miss the small change in the vowel.

Morphological Elements

Morphological elements follow the same approach as phonological elements. You can manipulate endings, stress patterns, or play with words.  Below is an example of ending manipulation.

Recording: I smiled a lot.

Choices:
(a) I smiled a lot.
(b) I smile a lot.

A sharp listener needs to hear the ‘d’ sound at the end of the word ‘smiled’, which can be challenging for an ESL student. Below is an example of a stress pattern.

Recording: My friend doesn’t smoke.

Choices:
(a) My friend doesn’t smoke.
(b) My friend does smoke.

The contraction in the example is the stress pattern the listener needs to hear. Below is an example of a play with words.

Recording: wine

Choices:
(a) wine
(b) vine

This is especially tricky for languages that do not have both a ‘v’ and ‘w’ sound, such as the Thai language.

Paraphrase Recognition

Paraphrase recognition involves listening to a statement and being able to reword it in an appropriate manner. This involves not only listening but also vocabulary selection and summarizing skills. Below is one example of sentence paraphrasing.

Recording: My name is James. I come from California

Choices:
(a) James is Californian
(b) James loves California

This is trickier because both can be true. However, the goal is to try and rephrase what was heard. Another form of paraphrasing is dialogue paraphrasing, as shown below.

Recording: 

Man: My name is Thomas. What is your name?
Woman: My name is Janet. Nice to meet you. Are you from Africa?
Man: No, I am an American

Choices:
(a) Thomas is from America
(b)Thomas is African

You can see the slight rephrasing that is wrong in choice (b). This requires the student to listen to slightly longer audio while still having to rephrase it appropriately.

Conclusion

Intensive listening involves listening for the little details of an audio clip. This is a skill that provides a foundation for much more complex levels of listening.

Critical Language Testing

Critical language testing (CLT) is a philosophical approach that states that there is widespread bias in language testing. This view is derived from critical pedagogy, which views education as a process manipulated by those in power.

There are many criticisms that CLT has of language testing such as the following.

  • Tests are deeply influenced by the culture of the test makers
  • There is a political dimension to tests
  • Tests should provide various modes of performance because of the diversity in how students learn.

Testing and Culture

CLT claims that tests are influenced by the culture of the test-makers. This puts people from other cultures at a disadvantage when taking the test.

An example of bias would be a reading comprehension test that uses a reading passage that reflects a middle class, white family. For many people, such an experience is unknown. When they try to answer the questions, they lack the contextual knowledge of someone who is familiar with this kind of situation, and this puts outsiders at a disadvantage.

Although the complaint is valid, there is little that can be done to rectify it. There is no single culture with which everyone is familiar. The best that can be done is to try to use diverse examples for a diverse audience.

Politics and Testing

Politics and testing is closely related to the prior topic of culture. CLT claims that testing can be used to support the agenda of those who made the test. For example, those in power can make a test that those who are not in power cannot pass. This allows those in power to maintain their hegemony.

An example of this would be the literacy tests that African Americans were required to pass in order to vote. Since most African Americans could not read, they were legally denied the right to vote. This is language testing being used to suppress a minority group.

Various Modes of Assessment

CLT also claims that there should be various modes of assessing. This critique comes from the known fact that not all students do well in traditional testing modes. Furthermore, it is also well-documented that students have multiple intelligences.

It is hard to refute the claim for diverse testing methods. The primary problem is the practicality of such a request. Varied assessment methods are often impractical, and they can also affect the validity of the assessment. Again, most of the time standard testing works, and it is hard to make exceptions.

Conclusion

CLT provides an important perspective on the use of assessment in language teaching. These concerns should be in the minds of test makers as they try to continue to improve how they develop assessments. This holds true even if the concerns of CLT cannot be addressed.

 

Binary Recommendation Engines in R

In this post, we will look at recommendation engines using binary information. A binary recommendation engine requires that the data rate the product as good/bad or some other system in which only two responses are possible. The “recommenderlab” package is needed for this analysis, and we will use the ratings of movies from grouplens.org for this post.

http://grouplens.org/datasets/movielens/latest/

If you want to follow along, you should download the “small” dataset and use the “ratings.csv” and “movies.csv” files. We will merge these two datasets based on the variable “movieId”. Below is the initial code.

library(recommenderlab)
ratings <- read.csv("~/Downloads/ml-latest-small/ratings.csv")#load ratings data
movies <- read.csv("~/Downloads/ml-latest-small/movies.csv")#load movies data
movieRatings<-merge(ratings, movies, by='movieId')#merge movies and ratings data

We now need to convert our “movieRatings” data frame to a matrix that “recommenderlab” can use. After doing this, we need to indicate that we are building a binary engine by setting the minimum rating to 2.5. What this means is that anything above 2.5 is in one category and anything below 2.5 is in a different category. We use the “binarize” function to do this. Below is the code.

movieRatings<-as(movieRatings,"realRatingMatrix")
movie.bin<-binarize(movieRatings,minRating=2.5)

We need to use a subset of our data, since each row should have a certain minimum number of ratings. For this analysis, we require more than ten ratings per row. Below is the code for this.

movie.bin<-movie.bin[rowCounts(movie.bin)>10]
movie.bin
## 1817 x 671 rating matrix of class 'binaryRatingMatrix' with 68643 ratings.

Next, we need to set up the evaluation scheme. We use the “evaluationScheme” function and plug in the data, the method of evaluation, the number of folds, and the given number of ratings. The code is as follows.

set.seed(456)
e.bin<-evaluationScheme(movie.bin,method='cross-validation',k=5,given=10)

We now make a list that holds all the models we want to run. We will run four models: “popular”, “random”, “ubcf”, and “ibcf”. We will then use the “evaluate” function to see how accurate our models are for 5, 10, 15, and 20 items.

algorithms.bin<-list(POPULAR=list(name="POPULAR",param=NULL),
                     RAND=list(name="RANDOM"),UBCF=list(name="UBCF"),IBCF=list(name="IBCF")) 
results.bin<-evaluate(e.bin,algorithms.bin,n=c(5,10,15,20))

The “avg” function will help us to see how our models did. Below are the results.

avg(results.bin)
## $POPULAR
##          TP        FP       FN       TN precision     recall        TPR
## 5  1.518356  3.481644 26.16877 629.8312 0.3036712 0.09293487 0.09293487
## 10 2.792329  7.207671 24.89479 626.1052 0.2792329 0.15074799 0.15074799
## 15 3.916164 11.083836 23.77096 622.2290 0.2610776 0.20512093 0.20512093
## 20 4.861370 15.138630 22.82575 618.1742 0.2430685 0.24831787 0.24831787
##            FPR
## 5  0.005426716
## 10 0.011221837
## 15 0.017266489
## 20 0.023608749
## 
## $RAND
##           TP        FP       FN       TN  precision      recall
## 5  0.2120548  4.787945 27.47507 628.5249 0.04241096 0.007530989
## 10 0.4104110  9.589589 27.27671 623.7233 0.04104110 0.015611349
## 15 0.6241096 14.375890 27.06301 618.9370 0.04160731 0.023631305
## 20 0.8460274 19.153973 26.84110 614.1589 0.04230137 0.033130430
##            TPR         FPR
## 5  0.007530989 0.007559594
## 10 0.015611349 0.015146399
## 15 0.023631305 0.022702057
## 20 0.033130430 0.030246522
## 
## $UBCF
##          TP        FP       FN       TN precision    recall       TPR
## 5  2.175890  2.824110 25.51123 630.4888 0.4351781 0.1582319 0.1582319
## 10 3.740274  6.259726 23.94685 627.0532 0.3740274 0.2504990 0.2504990
## 15 5.054795  9.945205 22.63233 623.3677 0.3369863 0.3182356 0.3182356
## 20 6.172603 13.827397 21.51452 619.4855 0.3086301 0.3748969 0.3748969
##            FPR
## 5  0.004387006
## 10 0.009740306
## 15 0.015492088
## 20 0.021557381
## 
## $IBCF
##          TP        FP       FN       TN precision     recall        TPR
## 5  1.330411  3.669589 26.35671 629.6433 0.2660822 0.08190126 0.08190126
## 10 2.442192  7.557808 25.24493 625.7551 0.2442192 0.13786523 0.13786523
## 15 3.532603 11.467397 24.15452 621.8455 0.2355068 0.19010813 0.19010813
## 20 4.546301 15.453699 23.14082 617.8592 0.2273151 0.23494969 0.23494969
##            FPR
## 5  0.005727386
## 10 0.011801682
## 15 0.017900255
## 20 0.024124329

The results are pretty bad for all models. The TPR (true positive rate) is always below .4. We can make a visual of the results by creating ROC curves using the TPR/FPR as well as precision/recall.

plot(results.bin,legend="topleft",annotate=T)

[ROC curves (TPR vs. FPR) for the four models]

plot(results.bin,"prec",legend="topleft",annotate=T)

[Precision/recall curves for the four models]

The visual makes it clear that the UBCF model is the best.

Conclusion

This post provided an example of the development of an algorithm for binary recommendations.

Developing Standardized Tests

For better or worse, standardized testing is a part of the educational experience of most students and teachers. The purpose here is not to attack or defend their use. Instead, in this post, we will look at how standardized tests are developed.

There are primarily five steps in developing a standardized test. These steps are

  1. Determine the goals
  2. Develop the specifications
  3. Create and evaluate test items
  4. Determine scoring and reporting
  5. Continue further development

Determining Goals

The goals of a standardized test are similar to the purpose statement of a research paper in that they determine the scope of the test. By scope, it is meant what the test will and perhaps will not do. This is important in terms of setting the direction for the rest of the project.

For example, the TOEFL’s purpose is to evaluate English proficiency. This means that the TOEFL does not deal with science, math, or other subjects. This may seem trivial to many, but this purpose makes it clear what the TOEFL is about.

Develop the Specifications

Specifications have to do with the structure of the test. For example, a test can have multiple-choice, short answer, essay, fill in the blank, etc. The structure of the test needs to be determined in order to decide what types of items to create.

Most standardized tests are primarily multiple-choice. This is due to the scale on which the tests are given. However, some language tests are now including a writing component as well.

Create Test Items

Once the structure is set, it is necessary to develop the actual items for the test. This involves a great deal of item response theory (IRT) and the use of statistics. There is also a need to ensure that the items measure the actual constructs of the subject domain.

For example, the TOEFL must be sure that it is really measuring language skills. This is done through consulting experts as well as statistical analysis to know for certain they are measuring English proficiency. The items come from a bank and are tested and retested.

Determine Scoring and Reporting

The scoring and reporting need to be considered. How many points is each item worth? What is the weight of one section of the test? Is the test norm-referenced or criterion-referenced? How many people will mark each test? These are some of the questions to consider.

The scoring and reporting matter a great deal because the scores can affect a person’s life significantly. Therefore, this aspect of standardized testing is treated with great care.

Further Development

A completed standardized test needs to be continuously reevaluated. Ideas and theories in a body of knowledge change frequently and this needs to be taken into account as the test goes forward.

For example, the SAT over the years has changed the point values of their test as well as added a writing component. This was done in reaction to concerns about the test.

Conclusion

The concepts behind developing standardized tests can be useful even for teachers making their own assessments. There is no need to follow this process as rigorously. However, familiarity with this strict format can help guide assessment development in many different situations.

Recommendation Engines in R

In this post, we will look at how to make a recommendation engine. We will use data that makes recommendations about movies. We will use the “recommenderlab” package to build several different engines. The data comes from

http://grouplens.org/datasets/movielens/latest/

At this link, you need to download the “ml-latest.zip” file. From there, we will use the “ratings” and “movies” files in this post. “Ratings” provides the ratings of the movies, while “movies” provides the names of the movies. Before going further, it is important to know that “recommenderlab” has five different techniques for developing recommendation engines (IBCF, UBCF, POPULAR, RANDOM, & SVD). We will use all of them for comparative purposes. Below is the code for getting started.

library(recommenderlab)
ratings <- read.csv("~/Downloads/ml-latest-small/ratings.csv")
movies <- read.csv("~/Downloads/ml-latest-small/movies.csv")

We now need to merge the two datasets so that they become one. This way, the titles and ratings are in one place. We will then coerce our “movieRatings” data frame into a “realRatingMatrix” in order to continue our analysis. Below is the code.

movieRatings<-merge(ratings, movies, by='movieId') #merge two files
movieRatings<-as(movieRatings,"realRatingMatrix") #coerce to realRatingMatrix

We will now create two histograms of the ratings. The first is raw data and the second will be normalized data. The function “getRatings” is used in combination with the “hist” function to make the histogram. The normalized data includes the “normalize” function. Below is the code.

hist(getRatings(movieRatings),breaks =10)

[Histogram of the raw ratings]

hist(getRatings(normalize(movieRatings)),breaks =10)

[Histogram of the normalized ratings]

We are now ready to create the evaluation scheme for our analysis. In this object we need to set the data name (movieRatings), the method we want to use (cross-validation), the amount of data we want to use for the training set (80%), how many ratings the algorithm is given during the test set (1) with the rest being used to compute the error. We also need to tell R what a good rating is (4 or higher) and the number of folds for the cross-validation (10). Below is the code for all of this.

set.seed(123)
eSetup<-evaluationScheme(movieRatings,method='cross-validation',train=.8,given=1,goodRating=4,k=10)

Below is the code for developing our models. To do this, we need to use the “Recommender” function and the “getData” function to get the dataset. Remember, we are using all five modeling techniques.

ubcf<-Recommender(getData(eSetup,"train"),"UBCF")
ibcf<-Recommender(getData(eSetup,"train"),"IBCF")
svd<-Recommender(getData(eSetup,"train"),"svd")
popular<-Recommender(getData(eSetup,"train"),"POPULAR")
random<-Recommender(getData(eSetup,"train"),"RANDOM")

The models have been created. We can now make our predictions using the “predict” function in addition to the “getData” function. We also need to set the argument “type” to “ratings”. Below is the code.

ubcf_pred<-predict(ubcf,getData(eSetup,"known"),type="ratings")
ibcf_pred<-predict(ibcf,getData(eSetup,"known"),type="ratings")
svd_pred<-predict(svd,getData(eSetup,"known"),type="ratings")
pop_pred<-predict(popular,getData(eSetup,"known"),type="ratings")
rand_pred<-predict(random,getData(eSetup,"known"),type="ratings")

We can now look at the accuracy of the models. We will do this in two steps. First, we will look at the error rates. After completing this, we will do a more detailed analysis of the stronger models. Below is the code for the first step

ubcf_error<-calcPredictionAccuracy(ubcf_pred,getData(eSetup,"unknown")) #calculate error
ibcf_error<-calcPredictionAccuracy(ibcf_pred,getData(eSetup,"unknown"))
svd_error<-calcPredictionAccuracy(svd_pred,getData(eSetup,"unknown"))
pop_error<-calcPredictionAccuracy(pop_pred,getData(eSetup,"unknown"))
rand_error<-calcPredictionAccuracy(rand_pred,getData(eSetup,"unknown"))
error<-rbind(ubcf_error,ibcf_error,svd_error,pop_error,rand_error) #combine objects into one data frame
rownames(error)<-c("UBCF","IBCF","SVD","POP","RAND") #give names to rows
error
##          RMSE      MSE       MAE
## UBCF 1.278074 1.633473 0.9680428
## IBCF 1.484129 2.202640 1.1049733
## SVD  1.277550 1.632135 0.9679505
## POP  1.224838 1.500228 0.9255929
## RAND 1.455207 2.117628 1.1354987

The results indicate that the “RAND” and “IBCF” models are clearly worse than the remaining three. We will now move to the second step and take a closer look at the “UBCF”, “SVD”, and “POP” models. We will do this by making a list and using the “evaluate” function to get other model evaluation metrics. We will make a list called “algorithms” that stores the three strongest models. Then we will make an object called “evlist”. In this object, we use the “evaluate” function and pass in the evaluation scheme (“eSetup”), the list (“algorithms”), and the number of movies to assess (5, 10, 15, 20).

algorithms<-list(POPULAR=list(name="POPULAR"),SVD=list(name="SVD"),UBCF=list(name="UBCF"))
evlist<-evaluate(eSetup,algorithms,n=c(5,10,15,20))
avg(evlist)
## $POPULAR
##           TP        FP       FN       TN  precision     recall        TPR
## 5  0.3010965  3.033333 4.917105 661.7485 0.09028443 0.07670381 0.07670381
## 10 0.4539474  6.214912 4.764254 658.5669 0.06806016 0.11289681 0.11289681
## 15 0.5953947  9.407895 4.622807 655.3739 0.05950450 0.14080354 0.14080354
## 20 0.6839912 12.653728 4.534211 652.1281 0.05127635 0.16024740 0.16024740
##            FPR
## 5  0.004566269
## 10 0.009363021
## 15 0.014177091
## 20 0.019075070
## 
## $SVD
##           TP        FP       FN       TN  precision     recall        TPR
## 5  0.1025219  3.231908 5.115680 661.5499 0.03077788 0.00968336 0.00968336
## 10 0.1808114  6.488048 5.037390 658.2938 0.02713505 0.01625454 0.01625454
## 15 0.2619518  9.741338 4.956250 655.0405 0.02620515 0.02716656 0.02716656
## 20 0.3313596 13.006360 4.886842 651.7754 0.02486232 0.03698768 0.03698768
##            FPR
## 5  0.004871678
## 10 0.009782266
## 15 0.014689510
## 20 0.019615377
## 
## $UBCF
##           TP        FP       FN       TN  precision     recall        TPR
## 5  0.1210526  2.968860 5.097149 661.8129 0.03916652 0.01481106 0.01481106
## 10 0.2075658  5.972259 5.010636 658.8095 0.03357173 0.02352752 0.02352752
## 15 0.3028509  8.966886 4.915351 655.8149 0.03266321 0.03720717 0.03720717
## 20 0.3813596 11.978289 4.836842 652.8035 0.03085246 0.04784538 0.04784538
##            FPR
## 5  0.004475151
## 10 0.009004466
## 15 0.013520481
## 20 0.018063361

Well, the numbers indicate that all the models are terrible. All metrics are scored rather poorly. True positives, false positives, false negatives, true negatives, precision, recall, true positive rate, and false positive rate are low for all models. Remember that these values are averages of the cross-validation. As such, for the “POPULAR” model when looking at the top five movies on average, the number of true positives was .3.

Even though the numbers are terrible the “POPULAR” model always performed the best. We can even view the ROC curve with the code below

plot(evlist,legend="topleft",annotate=T)

[ROC curves for the three models]

We can now determine individual recommendations. We first need to build a model using the POPULAR algorithm. Below is the code.

Rec1<-Recommender(movieRatings,method="POPULAR")
Rec1
## Recommender of type 'POPULAR' for 'realRatingMatrix' 
## learned using 9066 users.

We will now pull the top five recommendations for each of the first five raters and make a list. The numbers are the movie ids and not the actual titles.

recommend<-predict(Rec1,movieRatings[1:5],n=5)
as(recommend,"list")
## $`1`
## [1] "78"  "95"  "544" "102" "4"  
## 
## $`2`
## [1] "242" "232" "294" "577" "95" 
## 
## $`3`
## [1] "654" "242" "30"  "232" "287"
## 
## $`4`
## [1] "564" "654" "242" "30"  "232"
## 
## $`5`
## [1] "242" "30"  "232" "287" "577"

Below we can see the specific score for a specific movie. The names of the movies come from the original “movies” dataset.

rating<-predict(Rec1,movieRatings[1:5],type='ratings')
rating
## 5 x 671 rating matrix of class 'realRatingMatrix' with 2873 ratings.
movieresult<-as(rating,'matrix')[1:5,1:3]
colnames(movieresult)<-c("Toy Story","Jumanji","Grumpier Old Men")
movieresult
##   Toy Story  Jumanji Grumpier Old Men
## 1  2.859941 3.822666         3.724566
## 2  2.389340 3.352066         3.253965
## 3  2.148488 3.111213         3.013113
## 4  1.372087 2.334812         2.236711
## 5  2.255328 3.218054         3.119953

This is what the model thinks the person would rate the movie. The error is calculated from the difference between this predicted number and the actual rating. In addition, if someone did not rate a movie you would see an NA in that spot.

Conclusion

This was a lot of work. However, with additional work, you can have your own recommendation system based on data that was collected.

Understanding Recommendation Engines

Recommendation engines are used to make predictions about what future users would like based on prior users’ ratings. Whenever you provide numerical feedback on a product or service, this information can be used to provide recommendations in the future.

This post will look at various ways in which recommendation engines derive their conclusions.

Ways of Recommending

There are two common ways to develop a recommendation engine in a machine learning context: collaborative filtering and content-based filtering. Content-based recommendations rely solely on the data provided by the user. A user develops a profile through their activity, and the engine recommends products or services based on it. The main problem is that if there is little data on a user, poor recommendations are made.

Collaborative filtering produces crowd-based recommendations. What this means is that the data of many is used to make recommendations for one. This bypasses the lack-of-data concern that can happen with content-based recommendations.

There are four common approaches to collaborative filtering, and they are as follows

  • User-based collaborative filtering
  • Item-based collaborative filtering
  • Singular value decomposition and principal component analysis

User-based Collaborative Filtering (UBCF)

UBCF uses k-nearest neighbors or some similarity measure such as the Pearson correlation to find the users most similar to the target user. Once the neighbors are determined, the algorithm averages their ratings to predict the missing rating for the target user.

The predicted value can be used to determine if a user will like a particular product or service. Low values are not recommended while high values may be. A major weakness of UBCF is that calculating the similarities of users requires keeping all the data in memory, which is a computational challenge.
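
As a rough illustration of the idea, below is a minimal base R sketch with a made-up ratings matrix. The object names and the choice of two neighbors are purely for illustration; real systems use far more data and more careful weighting.

#made-up ratings: rows are users, columns are items, NA is the rating we want to predict
toyRatings<-rbind(u1=c(5,4,1,NA),u2=c(4,5,2,1),u3=c(5,3,1,2),u4=c(1,2,5,4))
sims<-sapply(c("u2","u3","u4"),function(u){
  cor(toyRatings["u1",],toyRatings[u,],use="pairwise.complete.obs") #Pearson similarity to u1
})
neighbors<-names(sort(sims,decreasing=TRUE))[1:2] #keep the two most similar users
weighted.mean(toyRatings[neighbors,4],w=sims[neighbors]) #predicted rating for u1 on item 4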

Item-based Collaborative Filtering (IBCF)

IBCF uses the similarity between items to make recommendations. This is calculated with the same kinds of measures as before (k-nearest neighbors, Pearson correlation, etc.). After finding the items most similar to the unknown item, the algorithm takes the average of the individual user’s ratings of those items to predict the rating the user would give the unknown item.

In order to assure accuracy, it is necessary to have a huge number of items that can have the similarities calculated. This leads to the same computational problems mentioned earlier.

Singular Value Decomposition and Principal Component Analysis (SVD, PCA)

When the dataset is too big for the first two options, SVD or PCA could be an appropriate choice. What each of these two methods does, in simple terms, is reduce the dimensionality of the data by making latent variables. Doing this reduces the computational effort as well as the noise in the data.

With SVD, we can reduce the data to a handful of factors. These retained factors can be used to approximately reproduce the original values, which can then be used to predict missing values.
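
A minimal sketch of this idea using base R’s “svd” function on a made-up matrix is below. The data and the choice of two factors are invented purely for illustration.

set.seed(1)
toyMatrix<-matrix(sample(1:5,20,replace=TRUE),nrow=4) #pretend 4 users rated 5 items
decomp<-svd(toyMatrix) #factorize into u, d, and v
k<-2 #keep only the first two factors
lowRank<-decomp$u[,1:k]%*%diag(decomp$d[1:k])%*%t(decomp$v[,1:k])
round(lowRank,2) #a low-rank approximation of the original ratings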

For PCA, items are combined into components, and items that load on the same component can be used to make predictions for an unknown data point for a user.

Conclusion

Recommendation engines play a critical part in generating sales for many companies. This post provided an insight into how they are created. Understanding this can allow you to develop recommendation engines based on data.

Political Intrigue and the English Language

Political intrigue has played a role in shaping the English language. In this post, we will look at one example from history of politics shaping the direction of the language.

Henry VIII (1491-1547), King of England, had a major problem. He desperately needed a male heir for his kingdom. In order to achieve this, Henry VIII annulled several marriages or had his wife executed. This, of course, did not go over well with the leaders of Europe, as nobles tended to marry nobles.

In order to have his first marriage (to Catherine of Aragon) canceled, Henry VIII needed the permission of the Pope, as divorce is normally not tolerated in Roman Catholicism. However, the Pope did not grant permission because he was facing political pressure from Catherine’s family, who ruled over Spain.

Henry VIII was not one to take no for an answer. Stressed over his lack of a male heir and the fact that Catherine was already 40, he banished Catherine and married his second wife, Anne Boleyn, much to the chagrin of his in-laws in Spain.

This resulted in the Pope excommunicating Henry VIII from the church. In response, Henry VIII started his own church, or at least laid the foundation for Protestantism in England, with the help of German Lutheran princes. It was this event that played a role in the development of the English language.

Protestantism in England

Until the collapse of his first marriage, Henry VIII was a strong supporter of the Catholic Church and was even given the title of “Defender of the Faith.” However, he now took steps to flood England with Bibles in response to what he saw as a betrayal of the Catholic Church in denying him the annulment that he sought for his first marriage.

During this same time period, William Tyndale had just completed a translation of the New Testament from Greek into English. His translation was the first from the original language into English. Tyndale also completed about half of the Old Testament.

Although Tyndale was tried and burned for translating the Bible, within a few years of his death, Henry VIII was permitting the publishing of the Bible. Tyndale’s translation was combined with the work of Miles Coverdale to create the first complete English version of the Bible called the “Great Bible” in 1539.

With the Bible in the hands of so many people, the English language began to flow into religion and worship services. Such words as “beautiful”, “two-edged”, “landlady”, and “broken-heart” were established as new words. These words are taken for granted today, but it was in translating phrases from the biblical languages that we gained these new forms of expression. How many men have called their woman beautiful? Or how many people have complained about their landlady? This is possible thanks to Tyndale’s translating and Henry VIII’s support.

All this happened not necessarily for the right motives, yet the foundation of the use of common tongues in worship, as well as the audacity of translating scripture into other languages, was established by a king seeking an heir.

Conclusion

Henry VIII was probably not the most religious man. He did not have any problem with divorcing or killing wives or with committing adultery. However, he used religion to achieve his political goals. By doing so, he inadvertently influenced the language of his country.

Clustering Mixed Data in R

One of the major problems with hierarchical and k-means clustering is that they cannot handle nominal data. The reality is that most data is mixed or a combination of both interval/ratio data and nominal/ordinal data.

One of many ways to deal with this problem is by using the Gower coefficient. This coefficient compares the cases in the dataset pairwise and calculates a dissimilarity between them. By dissimilarity, we mean a weighted average of the variable-by-variable differences between the two cases.
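
To make this concrete, here is a tiny made-up example. The “daisy” function from the “cluster” package (used again below) computes the Gower dissimilarities for a data frame that mixes numeric and factor variables.

library(cluster)
toy<-data.frame(age=c(25,40,62),sex=factor(c("male","female","male")),income=c(30,55,48))
daisy(toy,metric="gower") #pairwise Gower dissimilarities between the three rows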

Once the dissimilarity calculations are completed using the Gower coefficient (there are naturally other choices), you can then use regular kmeans clustering (there are also other choices) to find the traits of the various clusters. In this post, we will use the “MedExp” dataset from the “Ecdat” package. Our goal will be to cluster the mixed data into four clusters. Below is some initial code.

library(cluster);library(Ecdat);library(compareGroups)
data("MedExp")
str(MedExp)
## 'data.frame':    5574 obs. of  15 variables:
##  $ med     : num  62.1 0 27.8 290.6 0 ...
##  $ lc      : num  0 0 0 0 0 0 0 0 0 0 ...
##  $ idp     : Factor w/ 2 levels "no","yes": 2 2 2 2 2 2 2 2 1 1 ...
##  $ lpi     : num  6.91 6.91 6.91 6.91 6.11 ...
##  $ fmde    : num  0 0 0 0 0 0 0 0 0 0 ...
##  $ physlim : Factor w/ 2 levels "no","yes": 1 1 1 1 1 2 1 1 1 1 ...
##  $ ndisease: num  13.7 13.7 13.7 13.7 13.7 ...
##  $ health  : Factor w/ 4 levels "excellent","good",..: 2 1 1 2 2 2 2 1 2 2 ...
##  $ linc    : num  9.53 9.53 9.53 9.53 8.54 ...
##  $ lfam    : num  1.39 1.39 1.39 1.39 1.1 ...
##  $ educdec : num  12 12 12 12 12 12 12 12 9 9 ...
##  $ age     : num  43.9 17.6 15.5 44.1 14.5 ...
##  $ sex     : Factor w/ 2 levels "male","female": 1 1 2 2 2 2 2 1 2 2 ...
##  $ child   : Factor w/ 2 levels "no","yes": 1 2 2 1 2 2 1 1 2 1 ...
##  $ black   : Factor w/ 2 levels "yes","no": 2 2 2 2 2 2 2 2 2 2 ...

You can clearly see that our data is mixed with both numerical and factor variables. Therefore, the first thing we must do is calculate the gower coefficient for the dataset. This is done with the “daisy” function from the “cluster” package.

disMat<-daisy(MedExp,metric = "gower")

Now we can use the “kmeans” function to make our clusters. This is possible because all the factor variables have been converted to numerical values. We will set the number of clusters to 4. Below is the code.

set.seed(123)
mixedClusters<-kmeans(disMat, centers=4)

We can now look at a table of the clusters

table(mixedClusters$cluster)
## 
##    1    2    3    4 
## 1960 1342 1356  916

The groups seem reasonably balanced. We now need to add the results of the kmeans to the original dataset. Below is the code

MedExp$cluster<-mixedClusters$cluster

We can now build a descriptive table that will give us the proportions or means of each variable in each cluster. To do this, we need to use the “compareGroups” function. We will then take the output of the “compareGroups” function and use it in the “createTable” function to get our actual descriptive stats.

group<-compareGroups(cluster~.,data=MedExp)
clustab<-createTable(group)
clustab
## 
## --------Summary descriptives table by 'cluster'---------
## 
## __________________________________________________________________________ 
##                    1            2            3            4      p.overall 
##                  N=1960       N=1342       N=1356       N=916              
## ¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯ 
## med            211 (1119)   68.2 (333)   269 (820)   83.8 (210)   <0.001   
## lc            4.07 (0.60)  4.05 (0.60)  0.04 (0.39)  0.03 (0.34)   0.000   
## idp:                                                              <0.001   
##     no        1289 (65.8%) 922 (68.7%)  1123 (82.8%) 781 (85.3%)           
##     yes       671 (34.2%)  420 (31.3%)  233 (17.2%)  135 (14.7%)           
## lpi           5.72 (1.94)  5.90 (1.73)  3.27 (2.91)  3.05 (2.96)  <0.001   
## fmde          6.82 (0.99)  6.93 (0.90)  0.00 (0.12)  0.00 (0.00)   0.000   
## physlim:                                                          <0.001   
##     no        1609 (82.1%) 1163 (86.7%) 1096 (80.8%) 789 (86.1%)           
##     yes       351 (17.9%)  179 (13.3%)  260 (19.2%)  127 (13.9%)           
## ndisease      11.5 (8.26)  10.2 (2.97)  12.2 (8.50)  10.6 (3.35)  <0.001   
## health:                                                           <0.001   
##     excellent 910 (46.4%)  880 (65.6%)  615 (45.4%)  612 (66.8%)           
##     good      828 (42.2%)  382 (28.5%)  563 (41.5%)  261 (28.5%)           
##     fair      183 (9.34%)   74 (5.51%)  137 (10.1%)  42 (4.59%)            
##     poor       39 (1.99%)   6 (0.45%)    41 (3.02%)   1 (0.11%)            
## linc          8.68 (1.22)  8.61 (1.37)  8.75 (1.17)  8.78 (1.06)   0.005   
## lfam          1.05 (0.57)  1.49 (0.34)  1.08 (0.58)  1.52 (0.35)  <0.001   
## educdec       12.1 (2.87)  11.8 (2.58)  12.0 (3.08)  11.8 (2.73)   0.005   
## age           36.5 (12.0)  9.26 (5.01)  37.0 (12.5)  9.29 (5.11)   0.000   
## sex:                                                              <0.001   
##     male      893 (45.6%)  686 (51.1%)  623 (45.9%)  482 (52.6%)           
##     female    1067 (54.4%) 656 (48.9%)  733 (54.1%)  434 (47.4%)           
## child:                                                             0.000   
##     no        1960 (100%)   0 (0.00%)   1356 (100%)   0 (0.00%)            
##     yes        0 (0.00%)   1342 (100%)   0 (0.00%)   916 (100%)            
## black:                                                            <0.001   
##     yes       1623 (82.8%) 986 (73.5%)  1148 (84.7%) 730 (79.7%)           
##     no        337 (17.2%)  356 (26.5%)  208 (15.3%)  186 (20.3%)           
## ¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯

The table speaks for itself. Results that utilize factor variables are reported as proportions. For example, in cluster 1, 1289 people, or 65.8%, responded “no” to having an individual deductible plan (idp). Numerical variables show the mean with the standard deviation in parentheses. For example, in cluster 1 the average value of the family size variable (lfam) was 1.05 with a standard deviation of 0.57.

Conclusion

Mixed data can be partitioned into clusters with the help of the Gower coefficient or another measure. In addition, kmeans is not the only way to cluster the data. There are other choices, such as partitioning around medoids. The example provided here simply serves as a basic introduction to this.
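
If you want to try the partitioning around medoids alternative mentioned above, a minimal sketch using the “pam” function from the “cluster” package is below. It accepts the Gower dissimilarity matrix directly; the cluster labels will not match the kmeans ones exactly.

pamClusters<-pam(disMat,k=4,diss=TRUE) #partition around medoids on the Gower dissimilarities
table(pamClusters$clustering) #cluster sizes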

Item Indices for Multiple Choice Questions

Many teachers use multiple choice questions to assess students’ knowledge of a subject matter. This is especially true if the class is large and marking essays would prove to be impractical.

Even if best practices are used in making multiple choice exams, it can still be difficult to know if the questions are doing the work they are supposed to. Fortunately, there are several quantitative measures that can be used to assess the quality of a multiple choice question.

This post will look at three ways that you can determine the quality of your multiple choice questions using quantitative means. These three items are

  • Item facility
  • Item discrimination
  • Distractor efficiency

Item Facility

Item facility measures the difficulty of a particular question. This is determined by the following formula

Item facility = (number of students who answered the item correctly) / (total number of students who answered the item)

This formula simply calculates the percentage of students who answered the question correctly. There is no boundary for a good or bad item facility score. Your goal should be to try and separate the high ability from the low ability students in your class using challenging items that have a low item facility score. In addition, there should be several easier items with a high item facility score to support the weaker students as well as serve as warmups for the stronger students.

Item Discrimination

Item discrimination measures a question’s ability to separate the strong students from the weak ones. It is calculated with the following formula.

Item discrimination = (# items correct of strong group - # items correct of weak group) / (1/2 × total of the two groups)

The first thing that needs to be done in order to calculate the item discrimination is to divide the class into three groups by rank. The top 1/3 is the strong group, the middle third is the average group and the bottom 1/3 is the weak group. The middle group is removed and you use the data on the strong and the weak to determine the item discrimination.

The results of the item discrimination range from zero (no discrimination) to 1 (perfect discrimination). There are no hard cutoff points for item discrimination. However, values near zero are generally removed while a range of values above that is expected on an exam.
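
To make the arithmetic concrete, below is a small R sketch that computes both indices for a made-up matrix of scored responses (1 = correct, 0 = incorrect). The data, the group cutoffs, and the object names are invented purely for illustration.

set.seed(1)
responses<-matrix(rbinom(30*5,1,0.6),nrow=30,ncol=5) #30 students, 5 items, scored 0/1
itemFacility<-colMeans(responses) #proportion of students who answered each item correctly
totals<-rowSums(responses) #each student's total score
strong<-responses[totals>=quantile(totals,2/3),] #roughly the top third of the class
weak<-responses[totals<=quantile(totals,1/3),] #roughly the bottom third of the class
itemDiscrimination<-(colSums(strong)-colSums(weak))/((nrow(strong)+nrow(weak))/2)
round(rbind(itemFacility,itemDiscrimination),2)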

Distractor Efficiency

Distractor efficiency looks at the individual responses that students select on a multiple choice question. For example, if a multiple choice question has four possible answers, there should be a reasonable distribution of students across the various possible answers.

Distractor efficiency is tabulated by simply counting which answer each student selects for each question. Again, there are no hard rules for removal. However, if nobody selected a distractor, it may not be a good one.
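
A quick way to tabulate this in R is shown below. The letters are hypothetical responses from a small class to a single item.

itemResponses<-c("a","c","a","b","a","d","a","c","a","a","b","a") #made-up answers to one question
table(itemResponses) #how many students chose each option
round(prop.table(table(itemResponses)),2) #the same counts as proportions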

Conclusion

Assessing multiple choice questions becomes much more important as the class grows bigger or the test needs to be reused multiple times in various contexts. The information covered here is only an introduction to the much broader subject of item response theory.

Hierarchical Clustering in R

Hierarchical clustering is a form of unsupervised learning. What this means is that the data points lack any form of label, and the purpose of the analysis is to generate labels for them. In other words, we have no Y values in our data.

Hierarchical clustering, as used here, is an agglomerative technique. This means that each data point starts as its own cluster, and clusters are merged over successive iterations. This works well for small datasets but is difficult to scale. In addition, you need to choose the linkage, which determines how the distance between clusters is measured when deciding which ones to merge. There are several choices (Ward, complete, single, etc.), and the best choice depends on the context.

In this post, we will perform a hierarchical clustering analysis of the “MedExp” data from the “Ecdat” package. We are trying to identify distinct subgroups in the sample. The hierarchical clustering itself produces what is called a dendrogram. Below is some initial code.

library(cluster);library(compareGroups);library(NbClust);library(HDclassif);library(sparcl);library(Ecdat)
data("MedExp")
str(MedExp)
## 'data.frame':    5574 obs. of  15 variables:
##  $ med     : num  62.1 0 27.8 290.6 0 ...
##  $ lc      : num  0 0 0 0 0 0 0 0 0 0 ...
##  $ idp     : Factor w/ 2 levels "no","yes": 2 2 2 2 2 2 2 2 1 1 ...
##  $ lpi     : num  6.91 6.91 6.91 6.91 6.11 ...
##  $ fmde    : num  0 0 0 0 0 0 0 0 0 0 ...
##  $ physlim : Factor w/ 2 levels "no","yes": 1 1 1 1 1 2 1 1 1 1 ...
##  $ ndisease: num  13.7 13.7 13.7 13.7 13.7 ...
##  $ health  : Factor w/ 4 levels "excellent","good",..: 2 1 1 2 2 2 2 1 2 2 ...
##  $ linc    : num  9.53 9.53 9.53 9.53 8.54 ...
##  $ lfam    : num  1.39 1.39 1.39 1.39 1.1 ...
##  $ educdec : num  12 12 12 12 12 12 12 12 9 9 ...
##  $ age     : num  43.9 17.6 15.5 44.1 14.5 ...
##  $ sex     : Factor w/ 2 levels "male","female": 1 1 2 2 2 2 2 1 2 2 ...
##  $ child   : Factor w/ 2 levels "no","yes": 1 2 2 1 2 2 1 1 2 1 ...
##  $ black   : Factor w/ 2 levels "yes","no": 2 2 2 2 2 2 2 2 2 2 ...

For the purposes of this post, the dataset is too big. If we try to do the analysis with over 5,500 observations, it will take a long time. Therefore, we will only use the first 1,000 observations. In addition, we need to remove the factor variables, since the Euclidean distance used below is only defined for numeric variables. Below is the code.

MedExp_small<-MedExp[1:1000,]
MedExp_small$sex<-NULL
MedExp_small$idp<-NULL
MedExp_small$child<-NULL
MedExp_small$black<-NULL
MedExp_small$physlim<-NULL
MedExp_small$health<-NULL

We now need to scale our data. This is important because variables on different scales would otherwise have more or less influence on the results. Below is the code.

MedExp_small_df<-as.data.frame(scale(MedExp_small))

We now need to determine how many clusters to create. There is no strict rule for this, but we can use statistical analysis to help us. The “NbClust” package will conduct several different analyses and provide a suggested number of clusters. You have to set the distance, the min/max number of clusters, the method, and the index. The resulting graphs can be read by looking for the bend, or elbow, in them; that point suggests the best number of clusters.

numComplete<-NbClust(MedExp_small_df,distance = 'euclidean',min.nc = 2,max.nc = 8,method = 'ward.D2',index = c('all'))

(Plot: Hubert index values and second differences)

## *** : The Hubert index is a graphical method of determining the number of clusters.
##                 In the plot of Hubert index, we seek a significant knee that corresponds to a 
##                 significant increase of the value of the measure i.e the significant peak in Hubert
##                 index second differences plot. 
## 

(Plot: D index values and second differences)

## *** : The D index is a graphical method of determining the number of clusters. 
##                 In the plot of D index, we seek a significant knee (the significant peak in Dindex
##                 second differences plot) that corresponds to a significant increase of the value of
##                 the measure. 
##  
## ******************************************************************* 
## * Among all indices:                                                
## * 7 proposed 2 as the best number of clusters 
## * 9 proposed 3 as the best number of clusters 
## * 6 proposed 6 as the best number of clusters 
## * 1 proposed 8 as the best number of clusters 
## 
##                    ***** Conclusion *****                            
##  
## * According to the majority rule, the best number of clusters is  3 
##  
##  
## *******************************************************************
numComplete$Best.nc
##                     KL       CH Hartigan     CCC    Scott      Marriot
## Number_clusters 2.0000   2.0000   6.0000  8.0000    3.000 3.000000e+00
## Value_Index     2.9814 292.0974  56.9262 28.4817 1800.873 4.127267e+24
##                   TrCovW   TraceW Friedman   Rubin Cindex     DB
## Number_clusters      6.0   6.0000   3.0000  6.0000  2.000 3.0000
## Value_Index     166569.3 265.6967   5.3929 -0.0913  0.112 1.0987
##                 Silhouette   Duda PseudoT2  Beale Ratkowsky     Ball
## Number_clusters     2.0000 2.0000   2.0000 2.0000    6.0000    3.000
## Value_Index         0.2809 0.9567  16.1209 0.2712    0.2707 1435.833
##                 PtBiserial Frey McClain   Dunn Hubert SDindex Dindex
## Number_clusters     6.0000    1   3.000 3.0000      0  3.0000      0
## Value_Index         0.4102   NA   0.622 0.1779      0  1.9507      0
##                   SDbw
## Number_clusters 3.0000
## Value_Index     0.5195

A simple majority of the indices indicates that three clusters is most appropriate. However, four clusters would probably work just as well. Note that you may get slightly different results each time you run the analysis unless you set the seed.

To make our actual clusters, we need to calculate the distances between observations using the “dist” function while also specifying how to calculate them. We will calculate distance using the “euclidean” method. Then we will take that distance information and perform the actual clustering using the “hclust” function. Below is the code.

distance<-dist(MedExp_small_df,method = 'euclidean')
hiclust<-hclust(distance,method = 'ward.D2')

We can now plot the results. We will plot “hiclust” and set hang to -1 so that the observations are placed at the bottom of the plot. Next, we use the “cutree” function to identify 4 clusters and store the assignments in the “comp” variable. Lastly, we use the “ColorDendrogram” function from the “sparcl” package to highlight our actual clusters.

plot(hiclust,hang=-1, labels=F)
comp<-cutree(hiclust,4)
ColorDendrogram(hiclust,y=comp,branchlength = 100)

(Plot: dendrogram of the hierarchical clustering, colored by cluster)

We can also create some descriptive stats such as the number of observations per cluster.

table(comp)
## comp
##   1   2   3   4 
## 439 203 357   1

We can also make a table that looks at the descriptive stats by cluster by using the “aggregate” function.

aggregate(MedExp_small_df,list(comp),mean)
##   Group.1         med         lc        lpi       fmde     ndisease
## 1       1  0.01355537 -0.7644175  0.2721403 -0.7498859  0.048977122
## 2       2 -0.06470294 -0.5358340 -1.7100649 -0.6703288 -0.105004408
## 3       3 -0.06018129  1.2405612  0.6362697  1.3001820 -0.002099968
## 4       4 28.66860936  1.4732183  0.5252898  1.1117244  0.564626907
##          linc        lfam    educdec         age
## 1  0.12531718 -0.08861109  0.1149516  0.12754008
## 2 -0.44435225  0.22404456 -0.3767211 -0.22681535
## 3  0.09804031 -0.01182114  0.0700381 -0.02765987
## 4  0.18887531 -2.36063161  1.0070155 -0.07200553

Cluster 4 contains only a single observation, which stands out with extremely high medical costs (‘med’) and the highest education (‘educdec’). Among the larger clusters, cluster 1 is slightly older and the most educated, cluster 2 has the lowest incentive payment (‘lpi’), income (‘linc’), and education, and cluster 3 has high coinsurance (‘lc’) and the highest ‘fmde’. You can make boxplots of each of the stats above. Below is just an example of age by cluster.

MedExp_small_df$cluster<-comp
boxplot(age~cluster,MedExp_small_df)

(Plot: boxplot of age by cluster)

Conclusion

Hierarchical clustering is one way to provide labels for data that does not have them. The main challenge is determining how many clusters to create. However, this can be addressed by using the recommendations that come from various functions in R, such as those in the “NbClust” package.