Monthly Archives: October 2017

Conversational Analysis: Questions & Responses

Conversational analysis (CA) is the study of social interactions in everyday life. In this post, we will look at how questions and responses are categorized in CA.

Questions

In CA, there are generally three types of questions:

  • Identification question
  • Polarity question
  • Confirmation question

Identification Question

Identification questions are questions that employ one of the five W’s (who, what, where, when, why). The response can be open-ended or closed-ended. An example is below.

Where are the keys?

Polarity Question

A polarity question is a question that calls for a yes/no response.

Can you come to work tomorrow?

Confirmation Question

Similar to the polarity question, a confirmation question seeks to gather support for something the speaker already said.

Didn’t Sam go to the store already?

This question is seeking an affirmative answer.

Responses

There are also several ways in which people respond to a question. Below is a list of common ways.

  • Comply
  • Supply
  • Imply
  • Disclaim
  • Evade

Comply

Complying means giving a clear, direct answer to a question. Below is an example.

A: What time is it?
B: 6:30pm

Supply

Supplying is the act of giving a partial response that is often irrelevant and fails to answer the question.

A: Is this your dog?
B: Well…I do feed it once in a while

In the example above, person A asks a clear question. However, person B states what they do for the dog (feed it) rather than indicate if the dog belongs to them. Feeding the dog is irrelevant to ownership.

Imply

Implying is providing information indirectly to answer a question.

A: What time do you want to leave?
B: Not too late

The response from person B does not indicate any sort of specific time to leave. This leaves it up to person A to determine what is meant by “too late.”

Disclaim

Disclaiming is when the person states that they do not know the answer.

A: Where are the keys?
B: I don’t know

Evade

Evading is the act of answering without really answering the question.

A: Where is the car?
B: David needed to go shopping

In the example above, person B never states where the car is. Rather, they share what someone is doing with the car, so the question is never directly answered.

Conclusions

The interaction of a question and response can be interesting if it is examined more closely from a sociolinguistic perspective. The categories provided here can support the deeper analysis of conversation.

Conversational Analysis

Conversational analysis is a tool used by sociolinguists to examine dialogue between two or more people. The analysis can include such aspects as social factors, social dimensions, and other characteristics.

One unique tool in conversational analysis is identifying adjacency pairs. Adjacency pairs are two-part utterances in which the second speaker is replying to something the first speaker said. In this post, we will look at the following examples of adjacency pairs.

  • Request-agreement
  • Question-Answer
  • Assessment-Agreement
  • Greeting-Greeting
  • Compliment-Acceptance
  • Conversational Concluder
  • Complaint-Apology
  • Blame-Denial
  • Threat-Counterthreat
  • Warning-Acknowledgement
  • Offer-Acceptance

Request-Agreement

A request involves asking someone to do something, and agreement indicates that the person will do it. Below is an example.

A: Could you open the window?
B: No problem

Question-Answer

One person requests information from another. This is different from request-agreement because there is no need to agree. Below is an example.

A: Where are you from?
B: I am from Laos

Assessment-Agreement

An assessment seeks an opinion from someone, and agreement is a positive position on the subject. An example is below.

A: Do you like the food?
B: Yeah, it tastes great!

Greeting-Greeting

Two people say hello to one another.

A: Hello
B: Hello

Compliment-Acceptance

One person commends something about the other who shows appreciation for the comment.

A: I really like your shoes
B: Thank you

Conversational Concluder

This is a comment that signals the end of a conversation.

A: Goodbye
B: See you later

Complaint-Apology

One person indicates they are not happy with something, and the other person expresses regret over this.

A: The food is too spicy
B: We’re so sorry

Blame-Denial

One person accuses another, who tries to defend themselves.

A: You lost the phone?
B: No I didn’t!

Threat-Counterthreat

Two people mutually resist each other.

A: Sit down or I will call your parents!
B: Make me

Warning-Acknowledgement

One person points out a danger, and the other indicates that they understand.

A: Look both ways before crossing the street
B: No problem

Offer-Acceptance

One person gives something, and the other person shows appreciation.

A: Here’s the money
B: Thank you so much

Conclusion

These kinds of conversational pairs appear whenever people talk. For the average person, this is not important. However, when trying to look at the context of a conversation to understand what is affecting the way people are speaking, identifying adjacency pairs can be useful.

The Structure of Academic Writing

“The book is boring.” This is a common complaint many lecturers receive from students about the assigned reading in a class. Although this is discouraging to hear, it is usually a cry for help. What the student is really saying is that they cannot understand what they are reading. Yes, they read it, but they didn’t get it.

The missing ingredient for students to appreciate academic reading is to understand the structure of academic writing. Lecturers forget that students are not scholars and thus do not quite understand how scholars organize their writing. If students knew this, they would no longer be students. Therefore, lecturers need to help students understand not only the ideas of a book but also the actual structure in which those ideas are framed.

This post will try to explain the structure of academic writing in a general sense.

How it Works

Below is a brief outline of a common structure for an academic textbook.

  • Preface
    • Purpose of the book
    • Big themes of the book (chapters)
  • Chapter
    • Objectives/headings provide themes of the chapter
  • Headings
    • Provides theme of a section of a chapter

Here is what I consider to be a major secret of writing. The structure is highly redundant but at different levels of abstraction. The preface, chapters, and headings of a book all serve the same purpose but at different levels of scope. The preface is the biggest picture you can get of the text. It’s similar to the map of a country. The chapter zooms in somewhat and is similar to the map of a city. Lastly, the headings within a chapter are similar to a neighborhood map within a city.

The point is that academic writing is highly structured and organized. Students often think a text is boring. However, when they see the structure, they may not fall in love with academics but at least they will understand what the author is trying to say. A student must see the structure in order to appreciate the details.

Another way to look at this is as follows.

  • The paragraphs of a heading support the heading
  • The headings of a chapter support the chapter
  • The chapters of a book support the title of the book

A book is like a body: you have cells, tissues, and organs. Each level is an abstraction built from the one below it. Cells combine to make tissues, tissues combine to make organs, and so on. This is the structure by which academic writing takes place.

The goal of academic writing is not to be entertaining. That role is normally set aside for fiction writing. Since most students enjoy entertainment, they expect academic writing to use the same formula of fun. However, few authors place fun as one of the purposes in their preface. This is yet another disconnect between students and textbooks.

Conclusion

Academic writing is repetitive in terms of its structure. Each sub-section supports a higher section in the book. This repetitive structure is probably one aspect of academic writing students find so boring. However, this repetitive nature makes the writing highly efficient to understand, provided the reader is aware of it.

Understanding the Preface of a Textbook

A major problem students have in school is understanding what they read. However, the problem often is not reading in itself. By this I mean the students can read the words, but they do not know what they mean. In other words, they will read the text but cannot explain what the text was about.

There are several practical things a student can do to overcome this problem without having to make significant changes to their study habits. Some of the strategies that they can use involve looking at the structure of how the writing is developed. Examples of this include the following.

  • Reading the preface
  • Reading the chapter titles
  • Reading the chapter objectives
  • Reading the headings in the chapters
  • Make some questions
  • Now read & answer the questions

In this post, we will look at the benefits of reading the preface to a book.

Reading the Preface

When students are assigned reading they often skip straight to page one and start reading. This means they have no idea what the text is about or even what the chapter will be about. This is the same as jumping in your car to drive somewhere without directions. You might get there eventually but often you just end up lost.

One of the first things a student should do is read the preface of a book. The preface gives you some of the following information

  • Information about the author
  • The purpose of the book
  • The audience of the book
  • The major themes of the text
  • Assumptions

Knowing the purpose of the text is beneficial to understanding the author’s viewpoint. This is often more important in graduate studies than in undergrad.

Knowing the main themes of the book helps from a psychological perspective as well. These themes serve as mental hooks in your mind in which you can hang the details of the chapters that you will read. It is critical to see the overview and big picture of the text so that you have a framework in which to place the ideas of the chapters you will read.

Many books do not have a preface. Instead what they often do is make chapter one the “introduction” and include all the aspects of the preface in the first chapter. Both strategies are fine. However, it is common for teachers to skip the introduction chapter in order to get straight to the “content.” This is fast but can inhibit understanding of the text.

There is also usually an explanation of assumptions. The assumptions tell the reader what they should already know as well as the biases of the author. This is useful because it communicates the author’s position from the outset rather than leaving readers to infer it.

Conclusion

The preface serves the purpose of introducing the reader to the text. One of the goals of the preface is to convince the reader why they should read the book. It provides the big picture of the text, shares about the author, and indicates who the book is for, as well as sharing the author’s viewpoint.

Understanding Academic Text

Understanding academic text is possible through making some minor adjustments to one’s reading style. In this post, we will look at the following ideas for improving academic reading comprehension.

  • Reading the chapter titles
  • Reading the chapter objectives
  • Reading the headings in the chapters
  • Examine the Visuals
  • Make some questions
  • Now read & answer the questions

Read the Chapter Titles

You read the chapter title for the same reason as the preface. It gives you the big picture from which you develop a framework for placing the ideas of the author. I am always amazed how many times I ask my students what the title of the chapter is and they have no clue. This is because they were so determined to read that they never set things in place to understand.

For ESL readers, it is critical that they know the meaning of every word in the title. Again, this has to do with the importance of the title for shaping the direction of the reading. If the student gets lost in the details, that is what teaching support is for. However, if they have no idea what the chapter is about, there is little even the best teacher can do.

Read Chapter Objectives

The objectives of a chapter are a promise of what the author will write about. The student needs to know what the promises are so they know what to expect. This is similar to driving somewhere and expecting to see certain landmarks along the way. When you see these landmarks you know you are getting close to the destination.

The objectives provide the big picture of the chapter in the same way that the preface provides the big picture of the entire book. Again, it is common for students to skip this aspect of reading comprehension.

Read the Chapter Headings

By now you probably know why to read the chapter headings. If not, it is because the chapter headings tell the student what to expect in a particular section of the chapter. They serve as a local landmark or a localized purpose.

For an extremely efficient (or perhaps lazy) writer, the objectives and the headings of a chapter will be exactly the same with perhaps slight rewording. This is extremely beneficial for readers because not only do they see the objectives at the beginning but they see them stated again as headings in the chapter.

Examine the Visuals

Visuals are used to illustrate ideas in the text. For now, the student simply wants to glance at them. Being familiar with the visuals now will be useful when the student wants to understand them when reading.

When looking at a visual, here are some things to look for:

  • Title
  • Author
  • Date
  • What’s being measured
  • Scale (units of measurement)

For an initial, superficial glance, this is more than enough.

Make Questions, Read, and Answer 

After examining the text, the student should have questions about what the text is about. Now they should write down what they want to know after examining the various characteristics of the chapter, and then they begin to read so they can answer their questions.

Examine End of the Chapter Tools

After reading the chapter, many authors provide some sort of study tools at the end. I find it most useful to read the chapter before looking too closely at this information. The reason for this is that the summary and questions at the end indicate what the author thinks is important about the chapter. It’s hard to appreciate this if you did not read the chapter yet.

Knowing what is happening at the end of the chapter helps in reinforcing what you read. You can quiz yourself about the information and use this information to prepare for any exams.

Conclusion

Previewing a chapter is a strategy for understanding a chapter. The ideas a student reads about must have a framework in which the pieces can fit. This framework can be developed through examining the chapter before reading it in detail.

Linear Regression vs Bayesian Regression

In this post, we are going to look at Bayesian regression. In particular, we will compare the results of ordinary least squares regression with Bayesian regression.

Bayesian Statistics

Bayesian statistics involves the use of probabilities rather than frequencies when addressing uncertainty. This allows you to determine the distribution of the model parameters and not only point estimates. This is done by averaging over the model parameters, that is, by marginalizing the joint probability distribution.

Linear Regression

We will now develop our two models. The first model will be a normal regression and the second a Bayesian model. We will be looking at factors that affect the tax rate of homes in the “Hedonic” dataset in the “Ecdat” package. We will load our packages and partition our data. Below is some initial code

library(ISLR);library(caret);library(arm);library(Ecdat);library(gridExtra)
data("Hedonic")
inTrain<-createDataPartition(y=Hedonic$tax,p=0.7, list=FALSE)
trainingset <- Hedonic[inTrain, ]
testingset <- Hedonic[-inTrain, ]
str(Hedonic)
## 'data.frame':    506 obs. of  15 variables:
##  $ mv     : num  10.09 9.98 10.45 10.42 10.5 ...
##  $ crim   : num  0.00632 0.02731 0.0273 0.03237 0.06905 ...
##  $ zn     : num  18 0 0 0 0 0 12.5 12.5 12.5 12.5 ...
##  $ indus  : num  2.31 7.07 7.07 2.18 2.18 ...
##  $ chas   : Factor w/ 2 levels "no","yes": 1 1 1 1 1 1 1 1 1 1 ...
##  $ nox    : num  28.9 22 22 21 21 ...
##  $ rm     : num  43.2 41.2 51.6 49 51.1 ...
##  $ age    : num  65.2 78.9 61.1 45.8 54.2 ...
##  $ dis    : num  1.41 1.6 1.6 1.8 1.8 ...
##  $ rad    : num  0 0.693 0.693 1.099 1.099 ...
##  $ tax    : int  296 242 242 222 222 222 311 311 311 311 ...
##  $ ptratio: num  15.3 17.8 17.8 18.7 18.7 ...
##  $ blacks : num  0.397 0.397 0.393 0.395 0.397 ...
##  $ lstat  : num  -3 -2.39 -3.21 -3.53 -2.93 ...
##  $ townid : int  1 2 2 3 3 3 4 4 4 4 ...

We will now create our regression model

ols.reg<-lm(tax~.,trainingset)
summary(ols.reg)
## 
## Call:
## lm(formula = tax ~ ., data = trainingset)
## 
## Residuals:
##      Min       1Q   Median       3Q      Max 
## -180.898  -35.276    2.731   33.574  200.308 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)    
## (Intercept) 305.1928   192.3024   1.587  0.11343    
## mv          -41.8746    18.8490  -2.222  0.02697 *  
## crim          0.3068     0.6068   0.506  0.61339    
## zn            1.3278     0.2006   6.618 1.42e-10 ***
## indus         7.0685     0.8786   8.045 1.44e-14 ***
## chasyes     -17.0506    15.1883  -1.123  0.26239    
## nox           0.7005     0.4797   1.460  0.14518    
## rm           -0.1840     0.5875  -0.313  0.75431    
## age           0.3054     0.2265   1.349  0.17831    
## dis          -7.4484    14.4654  -0.515  0.60695    
## rad          98.9580     6.0964  16.232  < 2e-16 ***
## ptratio       6.8961     2.1657   3.184  0.00158 ** 
## blacks      -29.6389    45.0043  -0.659  0.51061    
## lstat       -18.6637    12.4674  -1.497  0.13532    
## townid        1.1142     0.1649   6.758 6.07e-11 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 63.72 on 341 degrees of freedom
## Multiple R-squared:  0.8653, Adjusted R-squared:  0.8597 
## F-statistic: 156.4 on 14 and 341 DF,  p-value: < 2.2e-16

The model does a reasonable job. Next, we will do our prediction and compare the results with the test set using correlation, summary statistics, and the mean absolute error. In the code below, we use the “predict.lm” function and include the arguments “interval” for the prediction as well as “se.fit” for the standard error

ols.regTest<-predict.lm(ols.reg,testingset,interval = 'prediction',se.fit = T)

Below is the code for the correlation, summary stats, and mean absolute error. For MAE, we need to create a function.

cor(testingset$tax,ols.regTest$fit[,1])
## [1] 0.9313795
summary(ols.regTest$fit[,1])
##    Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
##   144.7   288.3   347.6   399.4   518.4   684.1
summary(trainingset$tax)
##    Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
##   188.0   279.0   330.0   410.4   666.0   711.0
MAE<-function(actual, predicted){
        mean(abs(actual-predicted))
}
MAE(ols.regTest$fit[,1], testingset$tax)
## [1] 41.07212

The correlation is excellent. The summary stats are similar and the error is not unreasonable. Below is a plot of the actual and predicted values

We now need to combine some data into one dataframe. In particular, we need the following:

  • The actual dependent variable results
  • The predicted dependent variable results
  • The upper confidence value of the prediction
  • The lower confidence value of the prediction

The code is below

yout.ols <- as.data.frame(cbind(testingset$tax,ols.regTest$fit))
ols.upr <- yout.ols$upr
ols.lwr <- yout.ols$lwr

We can now plot this

p.ols <- ggplot(data = yout.ols, aes(x = testingset$tax, y = ols.regTest$fit[,1])) + geom_point() + ggtitle("Ordinary Regression") + labs(x = "Actual", y = "Predicted")
p.ols + geom_errorbar(ymin = ols.lwr, ymax = ols.upr)

You can see the strong linear relationship. However, the confidence intervals are rather wide. Let’s see how Bayes does.

Bayes Regression

Bayes regression uses the “bayesglm” function from the “arm” package. We need to set the family to “gaussian” and the link to “identity”. In addition, we have to set the “prior.df” (prior degrees of freedom) to infinity, as this indicates we want a normal prior.

bayes.reg<-bayesglm(tax~.,family=gaussian(link=identity),trainingset,prior.df = Inf)
bayes.regTest<-predict.glm(bayes.reg,newdata = testingset,se.fit = T)
cor(testingset$tax,bayes.regTest$fit)
## [1] 0.9313793
summary(bayes.regTest$fit)
##    Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
##   144.7   288.3   347.5   399.4   518.4   684.1
summary(trainingset$tax)
##    Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
##   188.0   279.0   330.0   410.4   666.0   711.0
MAE(bayes.regTest$fit, testingset$tax)
## [1] 41.07352

The numbers are essentially the same. This raises the question: what is the benefit of Bayesian regression? The answer is in the confidence intervals. Next, we will calculate the confidence intervals for the Bayesian model.

yout.bayes <- as.data.frame(cbind(testingset$tax,bayes.regTest$fit))
names(yout.bayes) <- c("tax", "fit")
critval <- 1.96 #approx for 95% CI
bayes.upr <- bayes.regTest$fit + critval * bayes.regTest$se.fit
bayes.lwr <- bayes.regTest$fit - critval * bayes.regTest$se.fit

We now create our Bayesian regression plot.

p.bayes <- ggplot(data = yout.bayes, aes(x = tax, y = fit)) + geom_point() + ggtitle("Bayesian Regression Prediction") + labs(x = "Actual", y = "Predicted")

Lastly, we display both plots as a comparison.

ols.plot <-  p.ols + geom_errorbar(ymin = ols.lwr, ymax = ols.upr)
bayes.plot <-  p.bayes + geom_errorbar(ymin = bayes.lwr, ymax = bayes.upr)
grid.arrange(ols.plot,bayes.plot,ncol=2)

As you can see, the Bayesian approach gives much more compact confidence intervals. This is because, in the Bayesian approach, a distribution of parameters is calculated from the posterior distribution. These values are then averaged to get the final prediction that appears on the plot. This reduces the variance and strengthens the confidence we can have in each individual example.

Review of “Usborne World of Animals”

The Usborne World of Animals was written by Susanna Davidson and Mike Unwin (pp. 128).

The Summary

This book is about animals and how they live in the world. The book has ten sections. The first section covers topics about how animals live in general. Some of the topics in this section include how animals move, eat, smell, taste, touch, hide, etc.

The next eight sections cover different animals in different regions of the world. Examples include toucans in South America, bears in North America, gorillas in Africa, otters in Europe, panda bears in Asia, kangaroos in Australia, and even elephant seals in Antarctica.

The Good

This book is full of rich photographs and even illustrations that provide additional learning. The photos depict animals in daily life such as a tiger running, polar bears playing, anteaters searching for food, bats sleeping, monkeys jumping, etc. Children will enjoy the pictures tremendously.

The text is fairly readable. The font is normally large, with smaller text being of less importance. There is even a little geography mixed in, as the book organizes the animals based on the region they are from. At the beginning of each section is a map showing where on the continent the animals are from.

The Bad

There is little to criticize about this book. One minor problem is that the maps are drawn way out of scale. Asia, in particular, looks really strange. Of course, this is not a geography book, but it is somewhat distracting in the learning experience.

Another small complaint could be the superficial nature of the text. There are more animals than there is time to really go deeply into. Again, for an expert this may be troublesome, but this may not be much of a problem for the typical child.

The Recommendation

This text is 5/5 stars. As a teacher, you can use it for reading to your students or add it to your library for personal reading. The photos and colors will provide a vivid learning experience for students for years to come.

Common Tasks in Machine Learning

Machine learning is used for a variety of tasks today, with a multitude of algorithms that can each do one or more of these tasks well. In this post, we will look at some of the most common tasks that machine learning algorithms perform. In particular, we will look at the following tasks.

  1. Regression
  2. Classification
  3. Forecasting
  4. Clustering
  5. Association rules
  6. Dimension reduction

Numbers 1-3 are examples of supervised learning, which is learning that involves a dependent variable. Numbers 4-6 are unsupervised which is learning that does not involve a clearly labeled dependent variable.

Regression

Regression involves understanding the relationship between a continuous dependent variable and categorical and continuous independent variables. Understanding this relationship allows for numeric prediction of the dependent continuous variable.

Example algorithms for regression include linear regression and random forest (for numeric prediction), as well as support vector machines and artificial neural networks.
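As a quick illustration (not part of the original post), here is a minimal regression sketch in R using the built-in “mtcars” dataset:

```r
# Regression sketch: predict a continuous outcome (mpg) from
# continuous predictors, using R's built-in mtcars dataset.
fit <- lm(mpg ~ wt + hp, data = mtcars)
summary(fit)  # coefficients, R-squared, and other fit statistics

# Numeric prediction for a hypothetical car (wt in 1000 lbs)
predict(fit, newdata = data.frame(wt = 3.0, hp = 120))
```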

Classification

Classification involves the use of a categorical dependent variable with continuous and or categorical independent variables. The purpose is to classify examples into the groups in the dependent variable.

Examples of this are logistic regression as well as all the algorithms mentioned under regression. Many algorithms can do both regression and classification.
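As a rough sketch, classification with logistic regression can be done in R on the built-in “mtcars” data, predicting transmission type:

```r
# Classification sketch: logistic regression predicting transmission
# type (am: 0 = automatic, 1 = manual) from weight and horsepower.
clf <- glm(am ~ wt + hp, data = mtcars, family = binomial)
probs <- predict(clf, type = "response")  # predicted probabilities
table(predicted = probs > 0.5, actual = mtcars$am)
```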

Forecasting

Forecasting is similar to regression. However, the difference is that the data is a time series. The goal remains the same of predicting future outcomes based on current available data. As such, a slightly different approach is needed because of the type of data involved.

Common algorithms for forecasting include ARIMA and even artificial neural networks.
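For example, a seasonal ARIMA model can be fit in R to the built-in “AirPassengers” monthly series; the model order below is illustrative, not tuned:

```r
# Forecasting sketch: a seasonal ARIMA model on monthly airline
# passenger counts, followed by a 12-month-ahead forecast.
fit <- arima(AirPassengers, order = c(1, 1, 1),
             seasonal = list(order = c(0, 1, 1), period = 12))
predict(fit, n.ahead = 12)$pred  # forecast for the next 12 months
```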

Clustering

Clustering involves grouping together items that are similar in a dataset. This is done by detecting patterns in the data. The problem is that the number of clusters needed is usually not known in advance, which leads to a trial-and-error approach if there is no other theoretical support.

Common clustering algorithms include k-means and hierarchical clustering. Latent Dirichlet allocation is used often in text mining applications.
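As a sketch, k-means clustering can be run in R on the built-in “iris” dataset; choosing three clusters is an assumption based on knowing there are three species:

```r
# Clustering sketch: k-means on the four numeric columns of iris.
set.seed(42)  # k-means starts from random centers
km <- kmeans(iris[, 1:4], centers = 3, nstart = 25)
table(cluster = km$cluster, species = iris$Species)  # compare to labels
```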

Association Rules

Association rules find items that occur together in a dataset. A common application of association rules is market basket analysis.

Common algorithms include Apriori and FP-Growth (frequent pattern growth).
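A minimal market basket sketch, assuming the “arules” package and its bundled “Groceries” transaction dataset are available:

```r
# Association rules sketch: mine rules from grocery transactions.
library(arules)
data("Groceries")
rules <- apriori(Groceries,
                 parameter = list(supp = 0.01, conf = 0.5))
inspect(head(sort(rules, by = "lift"), 3))  # top three rules by lift
```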

Dimension Reduction

Dimension reduction involves combining several redundant features into one or more components that capture the majority of the variance. Reducing the number of features can increase the speed of the computation as well as reduce the risk of overfitting.

In machine learning, principal component analysis is often used for dimension reduction. However, factor analysis is sometimes used as well.
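As a brief sketch, principal component analysis can be run in R on the numeric columns of the built-in “iris” dataset:

```r
# Dimension reduction sketch: PCA with variables scaled to unit variance.
pca <- prcomp(iris[, 1:4], scale. = TRUE)
summary(pca)        # proportion of variance explained per component
head(pca$x[, 1:2])  # the first two components as new features
```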

Conclusion

In machine learning, there is always an appropriate tool for the job. This post provided insight into the main tasks of machine learning as well as common algorithms for each situation.

Terms Related to Language

This post will examine different uses of the word language. There are several different ways that this word can be defined. We will look at the following terms for language.

  • Vernacular
  • Standard
  • National
  • Official
  • Lingua Franca

Vernacular Language

The term vernacular language can mean many different things. It can mean a language that is not standardized or a language that is not the standard language of a nation. Generally, a vernacular language is a language that lacks official status in a country.

Standard Language

A standard language is a language that has been codified. By this, it is meant that the language has dictionaries and other grammatical sources that describe and even prescribe the use of the language.

Most languages have experienced codification. However, codification is just one part of being a standard language. A language must also be perceived as prestigious and serve a high function.

By prestigious it is meant that the language has influence in a community. For example, Japanese is a prestigious language in Japan. By high function, it is meant that the language is used in official settings such as government, business, etc., which Japanese is used for.

National Language

A national language is a language used for political and cultural reasons to unite a people. Many countries that have a huge number of languages and ethnic groups will select one language as a way to forge an identity. For example, in the Philippines, the national language is Tagalog even though hundreds of other languages are spoken.

In Myanmar, Burmese is the national language even though dozens of other languages are spoken. The selection of the language is politically motivated, with the dominant group imposing its language on others.

Official Language

An official language is the language of government business. Many former colonized nations will still use an official language that comes from the people who colonized them. This is especially true in African countries such as Ivory Coast and Chad which use French as their official language despite having other indigenous languages available.

Lingua Franca

A lingua franca is a language that serves as a vehicle of communication between two language groups whose mother tongues are different. For example, English is often the de facto lingua franca of people who do not speak the same language.

Multiple Categories

A language can fit into more than one of the definitions above. For example, English is a vernacular language in many countries such as Thailand and Malaysia. However, English is not considered a vernacular language in the United States.

To make things more confusing, English is the language of the United States, but it is neither the national nor the official language, as this has never been legislated. Yet English is a standard language, as it has been codified and meets the other criteria for standardization.

Currently, English is viewed by many as an international Lingua Franca with a strong influence on the world today.

Lastly, a language can be in more than one category. Thai is the official, national, and standard language of Thailand.

Conclusion

Language is a term that can have many meanings. In this post, we looked at the different ways to understand this word.

Review of “Tut’s Mummy: Lost…and Found”

This post is a review of the book Tut’s Mummy: Lost…and Found by Judy Donnelly (pp. 48).

The Summary

This book covers the burial of King Tut along with the eventual discovery of his body several centuries later. The illustrator draws the preservation of the body, the funeral procession, and the burial of the mummy. Interspersed are actual artifacts from the tomb, such as a game board and a necklace.

The book then moves forward several centuries and explains the discovery of King Tut by Howard Carter. There are several more pictures of artifacts as well as a diagram of the burial chamber of King Tut.

The Good

This is a good informative read for younger children. The illustrations support the text yet the book is still text driven. What I mean by this is that you can’t just look at the pictures to understand the book. The text and illustrations work together.

There are also several photographs from the time of the discovery of King Tut’s tomb. The photos help establish the authenticity of the text. In addition, the text moves at a good pace and never gets bogged down in boring details.

The Bad

There is little to complain about in this text. It provides additional details about King Tut’s life and burial that are probably missing from a standard history textbook.

The Recommendation

This book deserves 4/5. It provides excellent supplementary material on a specific part of history. The writing style is brisk and the illustrations are excellent. Add this to your library if you work with elementary-age children.

Code-Switching & Lexical Borrowing


Code-switching involves a speaker changing languages as they talk. This post will explore some of the reasons why people code-switch. In addition, we will look at lexical borrowing and its use in communication.

Code-Switching

Code-switching is most commonly driven by social factors and the social dimensions of pragmatics. Social factors are the who, what, where, when, and why of communication. Social dimensions involve distance, status, formality, emotion, and referential traits.

For example, two people from the same ethnic group may briefly switch to their shared language to say hello to each other before returning to English. The “what” here is a greeting, and the use of the mother tongue indicates a high degree of intimacy between the speakers.

The topic of discussion can also lead to code-switching. For example, I have commonly seen students who share a mother tongue switch to English when discussing academic subjects. This may be because their academic studies use English as the medium of instruction.

Switching can also take place for emotional reasons. For example, a person may switch languages to communicate anger such as a mother switching to the mother-tongue to scold their child.

There is a special type of code-switching called metaphorical switching, which happens when the speaker switches languages for symbolic reasons. For example, when a person agrees about something, they may use their mother tongue, but when they disagree, they may switch to English. Switching back and forth like this indicates their opinion on a matter without having to express it too directly.

Lexical Borrowing

Lexical borrowing is used when a person takes a word from one language to replace an unknown word in a different language. Code-switching happens at the sentence level whereas lexical borrowing happens at the individual word level.

Borrowing does not always happen because of a memory lapse. Another reason for lexical borrowing is that some words simply do not translate into another language, which forces the speaker to borrow. For example, many languages do not have a word for computer or internet, so these words are borrowed when speaking.

Perceptions

Often, people have no idea that they are code-switching or borrowing. However, those who are conscious of it usually have a negative attitude towards it; a common criticism of code-switching is that it destroys both languages. In reality, it takes a unique mastery of both languages to code-switch or borrow lexically effectively.

Conclusion

Code-switching and lexical borrowing are characteristics of communication. For those who want to prescribe language, it may be frustrating to watch two languages being mixed together. However, from a descriptive perspective, this is a natural result of language interaction.

Absolute vs Relative Grading


Grading is a concept that almost no two teachers agree upon. Some believe in including effort while others believe only performance should be considered. Some believe in many A’s while others believe A’s should be rare.

In this post, we will look at absolute and relative grading and how these two ideas can be applied in an academic setting.

Absolute Grading

Absolute grading involves the teacher pre-specifying the standards for performance. For example, a common absolute grading scale would be

A = 90-100
B = 80-89
C = 70-79
D = 60-69
F = 0-59

Whatever score the student earns is their grade.  There are no adjustments made to their grade. For example, if everyone gets a score between 90-100 everyone gets an “A” or if everyone gets below 59 everyone gets an “F.” The absolute nature of absolute grading makes it inflexible and constraining for unique situations.
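The fixed-cutoff idea above can be sketched in a few lines of code. This is a minimal illustration using the 90/80/70/60 scale shown earlier; the function name is my own, not part of any grading system.

```python
def absolute_grade(score):
    """Map a numeric score to a letter grade using fixed cutoffs.

    The grade depends only on the score itself, never on how
    other students performed.
    """
    if score >= 90:
        return "A"
    elif score >= 80:
        return "B"
    elif score >= 70:
        return "C"
    elif score >= 60:
        return "D"
    else:
        return "F"

print(absolute_grade(92))  # A
print(absolute_grade(58))  # F
```

Note that the function has no information about the rest of the class, which is exactly what makes absolute grading inflexible: if everyone scores 95, everyone gets an “A.”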

Relative Grading

Relative grading allows the teacher to interpret the results of an assessment and determine grades based on how students performed relative to one another. One example of this is grading “on the curve.” In this approach, the grades on an assessment are forced to fit a “bell curve” no matter what the actual distribution is. A strict curve might look as follows.

A = Top 10% of students
B = Next 25% of students
C = Middle 30% of students
D = Next 25% of students
F = Bottom 10% of students

As such, even if every score in the class fell between 90-100%, relative grading would still spread the grades across a balanced distribution. Whether this is fair or not is another discussion.

Some teachers will divide the class grades by quartiles with a spread from A-D. Others will use the highest grade achieved by an individual student as the A grade and mark other students based on the performance of the best student.

There are also times when institutions set the policy for relative grading. For example, in a graduate school, you may see the following grading scale.

A = top 60%
B = next 30%
C = next 10%
D, F = Should never happen

The philosophy behind this is that graduate students are all expected to be excellent, so the grades should be higher. Earning a “C” is effectively the same as earning an “F,” and earning a “D” or “F” often leads to removal from the program.

Grading Philosophy

There will never be full agreement on how to grade, and teachers coming from different backgrounds make this even more challenging. For example, some cultures believe that the teacher should prepare the students for exams while others do not. Some cultures believe in self-assessment while others do not. Some cultures believe in a massive summative exam while others do not.

In addition, many believe that grades are objective, even though there is little evidence in academic research to support this. A teacher who believes the students are low performers will tend to give out low grades even if the students are high achievers.

As such, the most reasonable approach is for a school to discuss grading policies and lay out the school’s approach to grading to reduce confusion even if it does not reduce frustration.

Review of “Peoples of the World”


This post is a review of the book Peoples of the World by Roma Trundle (32 pp.).

The Summary

This book exposes the reader to various aspects of culture as they are addressed by many different people groups. Topics include money, food, clothing, crafts, religion, language, and music.

For each of these cultural topics, several people groups provide examples of how they address it. For the topic of money, for instance, different examples of currency are given: the Russian rouble, the Malaysian sen, and the Greek drachma. There are even examples of what is not traditionally viewed as money in the West, such as the use of salt as currency as well as bartering.

This pattern of an aspect of culture followed by examples is repeated throughout the book.

The Good

This book provides a great deal of exposure to cultures that most students are not familiar with, and the illustrations are adequate. There are also activities every few pages for the students, such as how to wear a sarong, sari, or turban, how to make wax pictures, and how to make a piñata.

The Bad 

There is a lot of small text on the pages, which makes the book unreadable for younger students. In addition, there are no learning tools or supports, leaving it up to the teacher to determine how to scaffold this material for their students. For less experienced teachers, this can be quite challenging.

The Recommendation

I give this book 2/5. There is just a lack of “wow” when looking at this text; nothing was done to make it stand out from the crowd. It’s worthy of the library but not especially valuable for teaching and instruction. Let the kids enjoy the pictures, and let the more academically inclined actually read it.