
Absolute vs Relative Grading

Grading is a concept that almost no two teachers agree upon. Some believe in including effort while others believe only performance should be considered. Some believe in many A’s while others believe A’s should be rare.

In this post, we will look at absolute and relative grading and how these two ideas can be applied in an academic setting.

Absolute Grading

Absolute grading involves the teacher pre-specifying the standards for performance. For example, a common absolute grading scale would be

A = 90-100
B = 80-89
C = 70-79
D = 60-69
F = 0-59

Whatever score the student earns is their grade. There are no adjustments. For example, if everyone scores between 90-100, everyone gets an "A," and if everyone scores 59 or below, everyone gets an "F." The absolute nature of absolute grading makes it inflexible and constraining in unique situations.
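The fixed scale above is mechanical enough to express in code. Below is a minimal Python sketch; the function name and the assumption of numeric scores from 0-100 are our own choices, not part of any standard.

```python
def absolute_grade(score):
    """Map a 0-100 score to a letter grade using the fixed scale above."""
    if score >= 90:
        return "A"
    elif score >= 80:
        return "B"
    elif score >= 70:
        return "C"
    elif score >= 60:
        return "D"
    else:
        return "F"

# Every student is graded against the same fixed cutoffs:
print([absolute_grade(s) for s in [95, 84, 72, 61, 40]])  # ['A', 'B', 'C', 'D', 'F']
```

Note that the function never looks at the rest of the class; that independence is exactly what makes the approach "absolute."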

Relative Grading

Relative grading allows the teacher to interpret the results of an assessment and determine grades based on student performance. One example of this is grading "on the curve." In this approach, the grades of an assessment are forced to fit a "bell curve" no matter what the actual distribution is. A hard grading curve would look as follows.

A = Top 10% of students
B = Next 25% of students
C = Middle 30% of students
D = Next 25% of students
F = Bottom 10% of students

As such, even if the entire class scored between 90-100% on an exam, relative grading would still spread those scores across the full range of grades. Whether this is fair or not is another discussion.
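A hard curve like the one above is really a ranking exercise: letters are assigned purely by a student's rank in the class, regardless of the raw scores. The following Python sketch illustrates this; the function name and the cumulative-cutoff handling are our own assumptions.

```python
def curve_grades(scores):
    """Assign letter grades by class rank using the hard curve above:
    top 10% A, next 25% B, middle 30% C, next 25% D, bottom 10% F."""
    n = len(scores)
    # Indices of students ranked from highest to lowest score
    ranked = sorted(range(n), key=lambda i: scores[i], reverse=True)
    # Cumulative cutoffs: top 10% A, top 35% B, top 65% C, top 90% D, rest F
    bands = [("A", 0.10), ("B", 0.35), ("C", 0.65), ("D", 0.90), ("F", 1.00)]
    grades = [None] * n
    for position, i in enumerate(ranked):
        fraction = (position + 1) / n  # fraction of the class at or above this rank
        for letter, cutoff in bands:
            if fraction <= cutoff:
                grades[i] = letter
                break
    return grades

# Even a class where everyone scored 90-100 is spread across the curve:
print(curve_grades([100, 99, 97, 96, 95, 94, 93, 92, 91, 90]))
# ['A', 'B', 'B', 'C', 'C', 'C', 'D', 'D', 'D', 'F']
```

Notice that a student with 90 earns an "F" here, even though the same score is an "A" under the absolute scale; that contrast is the heart of the absolute-versus-relative debate.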

Some teachers will divide the class grades by quartiles with a spread from A-D. Others will use the highest grade achieved by an individual student as the A grade and mark other students based on the performance of the best student.

There are times when institutions set the policy for relative grading. For example, in a graduate school, you may see the following grading scale.

A = top 60%
B = next 30%
C = next 10%
D, F = Should never happen

The philosophy behind this is that in graduate school all the students are excellent, so the grades should be better. Earning a "C" is effectively the same as earning an "F," and earning a "D" or "F" often leads to removal from the program.

Grading Philosophy

There will never be agreement on how to grade. Teachers coming from different backgrounds make this challenging. For example, some cultures believe that the teacher should prepare the students for exams while others do not. Some cultures believe in self-assessment while others do not. Some cultures believe in a massive summative exam while others do not.

In addition, many believe that grades are objective, even though there is little evidence in academic research to support this. A teacher who expects students to be low performers tends to give low grades even when the students are high achievers.

As such, the most reasonable approach is for a school to discuss grading policies and lay out the school’s approach to grading to reduce confusion even if it does not reduce frustration.


Self-Assessment

Education has generally focused on some form of external assessor watching the progress of a student. This is by far the standard approach. However, it is not the only way.

An alternative form of assessment is self-assessment. In this approach, the student judges their progress themselves rather than leaning on the judgment of a teacher. In this post, we will look at the pros and cons of self-assessment as well as several ways to incorporate self-assessment into the classroom.

Pros and Cons

Some of the advantages of this include the following.

  • Autonomy-The student must be able to ascertain what they are doing well and what they are doing poorly
  • Critical thinking skills-This relates to the first bullet; the student must form an opinion about their progress
  • Motivation-Students are often energized by the responsibility of making decisions themselves

There are also some drawbacks, such as the subjectivity of this form of assessment. However, developing the cognitive skills of self-assessment provides a reasonable tradeoff in many situations.

Types of Self-Assessment

Self-assessment can take one of the following forms.

  • Goal setting assessment
  • Assessment of performance
  • General assessment
  • Student-generated test

Goal Setting

Goal setting is the student deciding for themselves what they want to do or achieve in an academic context. The student lays down the criteria and attempts to achieve them. This is an excellent way to boost motivation, as many students love to dream, even if it is limited to academics.

Assessment of Performance

Performance assessment is the student judging how they did on a specific task. Examples may include assessing their performance on a speech or an essay. This is often best done with an open-ended learning experience like the previous examples. In other words, performance assessment might be meaningless for a multiple-choice quiz since the answer is fixed.

General Assessment

General assessment is assessing one's performance over time rather than at one specific moment. The student might judge their performance over an entire unit or semester and share their thoughts. This is much more vague in nature, but if the student walks away understanding how to improve, it can be beneficial.

Student-Generated Test

Having students generate test items strongly encourages review of course content. The student has to identify what they know and do not know as well as the level of understanding of their peers. This complex metacognitive process allows for stronger insights into the content.

Supporting Self-Assessment

As the teacher, it is necessary to consider the following.

  1. Clearly define what needs to be done. This is often best done by giving an example through a demonstration of self-reflection.
  2. Consider the format. The teacher can provide a checklist, a survey, or require students to write a self-assessment. The format depends on the goals as well as the abilities of the students.
  3. Challenge the student's assessment. Students will often be too harsh or too easy on themselves. Having students explain their position will deepen their critical thinking skills and encourage impartial assessments.

Conclusion

Self-assessment is another potential tool in the classroom. This form of assessment allows students to think and decide where they believe they are in their learning experience. As such, occasional use of this approach is probably beneficial for most students.

Conferencing with Students

Conferences can play a vital role in supporting the growth and improvement of your students. The one-on-one interaction is a priceless use of time for them. In this post, we will look at conferencing and the process for successful use of this idea.

Conferencing

A conference is an opportunity for a teacher and student to discuss one-on-one the student's progress with regard to academic performance. Academic performance can mean either summative or formative performance.

Conferences can also be used for long-term projects such as papers, research, or other more complex assignments. The length of time does not have to be more than 5-10 minutes in order to provide support. The personal nature of a conference seems to work even in such a short amount of time.

Below are some steps to take when conducting a conference with a student.

  1. Explain what is going well
  2. Ask the student if they see any other strengths
  3. Explain what needs to be improved
  4. Ask the student if they see any other problems
  5. Provide suggestions on how to improve weaknesses
  6. Let the student suggest ways to improve
  7. Ask the student if they have any questions

The Good

Begin by sharing what was excellent about the paper. This prepares the student for the bad news. There is almost always something to praise even from the weakest students.

You can also solicit what the student thinks is strong about their paper. This encourages critical thinking as it requires them to form an opinion and provide reasons for it. This also encourages dialog and makes conferences collaborative rather than top-down communication.

Conferences need to be evidence-based. This means when something is good, you have an example from the paper to show the student what good looks like. The same applies for the bad as well. Concrete examples are what people need in order to understand and learn.

The Bad

Next, it is time to share the problems with the paper. As the teacher, you point out where improvement is necessary. In addition, you allow the student to share where they think they can do better. Often there is awkward silence but self-reflection is critical to success.

If the student remains silent, you may elicit a response by asking them questions about their paper that indicate a weakness. Soon the student begins to see the problems for themselves.

The Solution

With problems identified it is important to provide ways to improve. This is where the learning begins. They see what’s wrong and they learn what is right. Naturally, the student can contribute as well to how to improve.

This is also the place where the teacher asks if there are any questions. By this point, dialogue has gone on for a while, and questions were probably already asked and answered. However, it is still good to ask one more time in case the student was waiting for whatever reason.

Conclusion

Conferencing is time-consuming, but it provides an excellent learning experience for students. Conferences do not need to be long if there is adequate preparation and some sort of structure to the experience.

Types of Rubrics for Writing

Grading essays, papers and other forms of writing is subjective and frustrating for teachers at times. One tool that helps in improving the consistency of the marking, as well as the speed, is the use of rubrics. In this post, we will look at three commonly used rubrics which are…

  • Holistic
  • Analytical
  • Primary trait

Holistic Rubric

A holistic rubric looks at the overall quality of the writing. Normally, there are several levels on the rubric and each level has several descriptors on it. Below is an example template

The descriptors must be systematic, which means that they are addressed in each level and in the same order. Below is an actual holistic rubric for writing.

In the example above, there are four levels of marking. The descriptors are

  • idea explanation
  • coherency
  • grammar

Between levels, different adverbs and adjectives are used to distinguish the levels.  For example, in level one, “ideas are thoroughly explained” becomes “ideas are explained” in the second level. The use of adverbs is one of the easiest ways to distinguish between levels in a holistic rubric.

Holistic rubrics offer the convenience of fast marking that is easy to interpret and comes with high reliability. The downside is that there is a lack of strong feedback for improvement.

Analytical Rubrics

Analytical rubrics assign a score to each individual attribute the teacher is looking for in the writing. In other words, instead of lumping all the descriptors together as is done in a holistic rubric, each trait is given its own score. Below is a template of an analytical rubric.

[Analytical rubric template]

You can see that the levels are across the top and the descriptors across the side. Best performance moves from left to right all the way to worst performance. Each level is assigned a range of potential point values.

Below is an actual analytical writing rubric.

[Analytical rubric for writing]

Analytical rubrics provide much more washback and learning than holistic rubrics. Of course, they also take a lot more time for the teacher to complete.
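Scoring with an analytical rubric amounts to summing the per-trait marks. The short Python sketch below illustrates this using the three descriptors mentioned earlier; the point values and function name are invented for illustration, not taken from any actual rubric.

```python
# Hypothetical analytical rubric: each trait is scored on its own scale,
# and the paper's mark is the sum of the trait scores.
rubric = {
    "idea explanation": 30,  # maximum points per trait (invented values)
    "coherency": 30,
    "grammar": 40,
}

def analytical_score(marks):
    """Total a paper's marks, checking each trait stays within its maximum."""
    for trait, points in marks.items():
        if points > rubric[trait]:
            raise ValueError(f"{trait}: {points} exceeds maximum {rubric[trait]}")
    return sum(marks.values())

print(analytical_score({"idea explanation": 25, "coherency": 22, "grammar": 31}))  # 78
```

The per-trait breakdown is exactly what gives analytical rubrics their stronger feedback: the student sees not just the total but where the points were lost.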

Primary Trait

A lesser-known way of marking papers is the use of a primary trait rubric. With primary trait scoring, the student is only assessed on one specific function of writing, for example, persuasion if they are writing an essay, or perhaps vocabulary use for an ESL student writing paragraphs.

The template would be similar to a holistic rubric except that there would only be one descriptor instead of several. The advantage of this is that it allows the teacher and the student to focus on one aspect of writing. Naturally, this can also be a disadvantage, as writing involves more than one specific skill.

Conclusion

Rubrics are useful for a variety of purposes. For writing, it is critical that you understand what the levels and descriptors are when deciding on what kind of rubric you want to use. In addition, the context affects what type of rubric to use as well.

Types of Writing

This post will look at several types of writing that are done for assessment purposes. In particular, we will look at this from the four levels of writing, which are

  • Imitative
  • Intensive
  • Responsive
  • Extensive

Imitative 

Imitative writing is focused strictly on the grammatical aspects of writing. The student simply reproduces what they see. This is a common way to teach children how to write. Additional examples of activities at this level include cloze tasks in which the student has to write the word in the blank from a list, spelling tests, matching, and even converting numbers to their word equivalents.

Intensive

Intensive writing is more concerned with selecting the appropriate word for a given context. Example activities include grammatical transformation, such as changing all verbs to past tense, sequencing pictures, describing pictures, completing short sentences, and ordering tasks.

Responsive 

Responsive writing involves the development of sentences into paragraphs. The focus is almost exclusively on the context or function of the writing. Form concerns are primarily at the discourse level, which means how the sentences work together to make paragraphs and how the paragraphs work to support a thesis statement. Writing is normally no more than 2-3 paragraphs at this level.

Example activities at the responsive level include short reports, interpreting visual aids, and summaries.

Extensive

Extensive writing is responsive writing over the course of an entire essay or research paper. The student is able to shape a purpose, objectives, main ideas, conclusions, etc., into a coherent paper.

For many students, this is exceedingly challenging in their mother tongue, and it is further exacerbated in a second language. There is also the experience of writing multiple drafts of a single paper.

Marking Responsive & Extensive Papers

Marking higher-level papers requires a high degree of subjectivity. This is because of the authentic nature of this type of assessment. As such, it is critical that the teacher communicates expectations clearly through the use of rubrics or some other form of communication.

Another challenge is the issue of time. Higher-level papers take much more time to develop. This means that they normally cannot be used as a form of in-class assessment. If they are used as in-class assessments, it leads to a decrease in the authenticity of the assessment.

Conclusion

Writing is a critical component of the academic experience. Students need to learn how to shape and develop their ideas in print. For teachers, it is important to know at what level the student is capable of writing in order to support them for further growth.

Reading Assessment at the Interactive and Extensive Level

In reading assessment, the interactive and extensive levels are the highest levels of reading. This post will provide examples of assessments at each of these two levels.

Interactive Level

Reading at this level is focused on both form and meaning of the text with an emphasis on top-down processing. Below are some assessment examples.

Cloze

Cloze assessment involves removing certain words from a paragraph and expecting the student to supply them. The criteria for removal is either every nth word (known as fixed-ratio deletion) or removing words based on their meaning (known as rational deletion).

In terms of marking, you have the choice of marking based on the student providing the exact word or any appropriate word. Exact-word scoring is strict but consistent, while appropriate-word scoring can be subjective.
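A fixed-ratio cloze is simple to generate by hand or in code. Here is a small Python sketch that blanks every nth word and keeps an answer key, which is what you would mark against under exact-word scoring; the function name and blank marker are our own choices.

```python
def make_cloze(text, n=7):
    """Build a fixed-ratio cloze: blank out every nth word.
    Returns the gapped text and the removed words (the answer key)."""
    words = text.split()
    answers = []
    for i in range(n - 1, len(words), n):
        answers.append(words[i])
        words[i] = "_____"
    return " ".join(words), answers

passage = ("Cloze tests remove words from a passage and ask the "
           "student to restore them from context")
gapped, key = make_cloze(passage, n=5)
print(gapped)
print(key)
```

Rational deletion, by contrast, cannot be automated this way, since it requires the teacher to judge which words carry the meaning worth testing.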

Read and Answer the Question

This is perhaps the most common form of reading assessment. The student simply reads a passage and then answers questions in T/F, multiple-choice, or some other format.

Information Transfer

Information transfer involves the students interpreting something. For example, they may be asked to interpret a graph and answer some questions. They may also be asked to elaborate on the graph, make predictions, or explain. Explaining a visual is a common requirement for the IELTS.

Extensive Level

This is the highest level of reading. It is strictly top-down and requires the ability to see the "big picture" within a text. Marking at this level is almost always subjective.

Summarize and React

Summarizing and reacting requires the student to read a large amount of information, share the main ideas, and then provide their own opinion on the topic. This is difficult, as the student must understand the text to a certain extent and then form an opinion about what they understand.

I also like to have my students write several questions they have about the text. This teaches them to identify what they do not know. These questions are then shared in class so that they can be discussed.

For marking purposes, you can provide directions about the number of words, paragraphs, etc., to give guidance. However, marking at this level of reading is still subjective. The primary purpose of marking should probably be evidence that the student read the text.

Conclusion

The interactive and extensive levels of reading are where teaching can become enjoyable. Students have moved beyond learning to read to reading to learn. This opens up many possibilities in terms of learning experiences.

Reading Assessment at the Perceptual and Selective Level

This post will provide examples of assessments that can be used for reading at the perceptual and selective level.

Perceptual Level

The perceptual level is focused on bottom-up processing of text. Comprehension ability is not critical at this point. Rather, you are just determining if the student can accomplish the mechanical process of reading.

Examples

Reading Aloud-How this works is probably obvious to most teachers. The students read a text out loud in the presence of an assessor.

Picture-Cued-Students are shown a picture. At the bottom of the picture are words. The students read the word and point to a visual example of it in the picture. For example, if the picture has a cat in it, at the bottom of the picture would be the word cat. The student would read the word cat and point to the actual cat in the picture.

This can be extended by using sentences instead of words. For example, if the picture shows a man driving a car, there may be a sentence at the bottom of the picture that says "a man is driving a car." The student would then point to the man in the picture who is driving.

Another option is T/F statements. Using our cat example from above, we might write "There is one cat in the picture," and the student would then select T/F.

Other Examples-These include multiple-choice and written short answer.

Selective Level

The selective level is the next level above perceptual. At this level, the student should be able to recognize various aspects of grammar.

Examples

Editing Task-Students are given a reading passage and asked to fix the grammar. This can happen in many different ways. They could be asked to pick the incorrect word in a sentence or to add or remove punctuation.

Picture-Cued Task-This task appeared at the perceptual level. Now it is more complicated. For example, the students might be required to read statements and label a diagram appropriately, such as the human body or aspects of geography.

Gap-Filling Task-Students read a sentence and complete it appropriately.

Other Examples-These include multiple-choice and matching. The multiple-choice items may focus on grammar, vocabulary, etc. Matching attempts to assess a student's ability to pair similar items.

Conclusion

Reading assessment can take many forms. The examples here provide ways to assess students who are still highly immature in their reading abilities. As fluency develops, more complex measures can be used to determine a student's reading capability.

Assessing Speaking in ESL

In this post, we will look at different activities that can be used to assess a language learner's speaking ability. Unfortunately, we will not go over how to mark or grade the activities; we will only provide examples.

Directed Response

In this activity, the teacher tries to have the student use a particular grammatical form by having the student modify something the teacher says. Below is an example.

Teacher: Tell me he went home
Student: He went home

This is obviously not deep. However, the student had to know to remove the words “tell me” from the sentence and they also had to know that they needed to repeat what the teacher said. As such, this is an appropriate form of assessment for beginning students.

Read Aloud

Read aloud is simply having the student read a passage verbatim out loud. Normally, the teacher will assess such things as pronunciation and fluency. There are several problems with this approach. First, reading aloud is not authentic, as this is not an in-demand skill in today's workplace. Second, it blends reading with speaking, which can be a problem if you do not want to assess both at the same time.

Oral Questionnaires 

Students are expected to respond to and/or complete sentences. Normally, there is some sort of setting, such as a mall, school, or bank, that provides the context or pragmatics. Below is an example in which a student has to respond to a bank teller. The blank lines indicate where the student would speak.

Teacher (as bank teller): Would you like to open an account?
Student:_______________________
Teacher (as bank teller): How much would you like to deposit?
Student:___________________________

Visual Cues

Visual cues are highly open-ended. For example, you can give the students a map and ask them to give you directions to a location on the map. In addition, students can describe things in the picture or point to things as you ask them to. You can also ask the students to make inferences about what is happening in a picture. Of course, all of these choices are highly difficult to grade and may be best suited for formative assessment.

Translation

Translating can be a highly appropriate skill to develop in many contexts. In order to assess this, the teacher provides a word, a phrase, or perhaps something more complicated, such as directly translating their speech. The student then takes the input and reproduces it in the second language.

This is tricky to do. For one, it must be done on the spot, which is challenging for anybody. In addition, it also requires the teacher to have some mastery of the student's mother tongue, which for many is not possible.

Other Forms

There are many more examples that cannot be covered here, including interviews, role play, and presentations. However, these are much more common forms of speaking assessment, so most teachers are already familiar with them.

Conclusion

Speaking assessment is a major component of the ESL teaching experience. The ideas presented here will hopefully provide some additional ways that this can be done.

Responsive Listening Assessment

Responsive listening involves listening to a small amount of a language such as a command, question, or greeting. After listening, the student is expected to develop an appropriate short response. In this post, we will examine two examples of the use of responsive listening. These two examples are…

  • Open-ended response to question
  • Suitable response to a question

Open-Ended Responsive Listening

When an open-ended item is used in responsive listening, the student listens to a question and provides an answer that suits the context of the question. For example,

Listener hears: What country are you from?
Student writes: _______________________________

Assessing the answer is determined by whether the student was able to develop an answer that is appropriate. The open-ended nature of the question allows for creativity and expressiveness.

A drawback to the openness is determining the correctness of the answers. You have to decide if misspellings, synonyms, etc., are wrong answers. There are strong arguments for and against any small mistake among ESL teachers. Generally, communicative priorities trump concerns of grammar and orthography.

Suitable Response to a Question

Suitable response items often use multiple-choice answers that the student selects from in order to answer the question. Below is an example.

Listener hears: What country is Steven from?
Student picks:
a. Thailand
b. Cambodia
c. Philippines
d. Laos

Based on the recording, the student would need to indicate the correct response. The multiple-choice format limits the number of options the student has in replying. This can in many ways make determining the answer much easier than a short answer. No matter what, the student has a 25% chance of being correct in our example.

Since multiple-choice is used, it is important to remember that all the strengths and weaknesses of multiple-choice items apply. This can be good or bad depending on where your students are in their listening ability.

Conclusion

Responsive listening assessment allows a student to supply an answer to a question that is derived from what they were listening to. This is in many ways a practical way to assess an individual's basic understanding of a conversation.

Intensive Listening and ESL

Intensive listening is listening for the elements (phonemes, intonation, etc.) in words and sentences. This form of listening is often assessed in an ESL setting as a way to measure an individual's phonological and morphological awareness as well as their ability to paraphrase. In this post, we will look at these three forms of assessment with examples.

Phonological Elements

Phonological elements include phonemic consonant pairs and phonemic vowel pairs. Phonemic consonant pairs have to do with distinguishing consonants. Below is an example of what an ESL student would hear, followed by potential choices they may have on a multiple-choice test.

Recording: He’s from Thailand

Choices:
(a) He’s from Thailand
(b) She’s from Thailand

The answer is clearly (a). The confusion is with the addition of the 's' sound in choice (b). If someone is not listening carefully, they could make a mistake. Below is an example of phonemic pairs involving vowels.

Recording: The girl is leaving?

Choices:
(a) The girl is leaving?
(b) The girl is living?

Again, if someone is not listening carefully they will miss the small change in the vowel.

Morphological Elements

Morphological elements follow the same approach as phonological elements. You can manipulate endings, stress patterns, or play with words.  Below is an example of ending manipulation.

Recording: I smiled a lot.

Choices:
(a) I smiled a lot.
(b) I smile a lot.

A sharp listener needs to hear the 'd' sound at the end of the word 'smiled,' which can be challenging for ESL students. Below is an example of a stress pattern.

Recording: My friend doesn’t smoke.

Choices:
(a) My friend doesn’t smoke.
(b) My friend does smoke.

The contraction in the example is the stress pattern the listener needs to hear. Below is an example of a play with words.

Recording: wine

Choices:
(a) wine
(b) vine

This is especially tricky for languages that do not have both a ‘v’ and ‘w’ sound, such as the Thai language.

Paraphrase recognition

Paraphrase recognition involves listening to an example and being able to reword it in an appropriate manner. This involves not only listening but also vocabulary selection and summarizing skills. Below is one example of sentence paraphrasing.

Recording: My name is James. I come from California

Choices:
(a) James is Californian
(b) James loves California

This is trickier because both can be true. However, the goal is to try and rephrase what was heard. Another form of paraphrasing is dialogue paraphrasing, as shown below.

Recording: 

Man: My name is Thomas. What is your name?
Woman: My name is Janet. Nice to meet you. Are you from Africa?
Man: No, I am an American

Choices:
(a) Thomas is from America
(b) Thomas is African

You can see the slight rephrasing that is wrong in choice (b). This requires the student to listen to slightly longer audio while still having to rephrase it appropriately.

Conclusion

Intensive listening involves listening for the little details of an audio passage. This is a skill that provides a foundation for much more complex levels of listening.

Critical Language Testing

Critical language testing (CLT) is a philosophical approach that states that there is widespread bias in language testing. This view is derived from critical pedagogy, which views education as a process manipulated by those in power.

There are many criticisms that CLT has of language testing such as the following.

  • Tests are deeply influenced by the culture of the test makers
  • There is a political dimension to tests
  • Tests should provide various modes of performance because of the diversity in how students learn

Testing and Culture

CLT claims that tests are influenced by the culture of the test-makers. This puts people from other cultures at a disadvantage when taking the test.

An example of bias would be a reading comprehension test that uses a passage reflecting a middle-class, white family. For many people, such an experience is unknown. When they try to answer the questions, they lack the contextual knowledge of someone who is familiar with this kind of situation, and this puts outsiders at a disadvantage.

Although the complaint is valid, there is little that can be done to rectify it. There is no single culture that everyone is familiar with. The best that can be done is to try to provide diverse examples for a diverse audience.

Politics and Testing

Politics and testing are closely related to the prior topic of culture. CLT claims that testing can be used to support the agenda of those who made the test. For example, those in power can make a test that those who are not in power cannot pass. This allows those in power to maintain their hegemony.

An example of this would be the literacy tests that African Americans were required to pass in order to vote. Since most African Americans could not read, they were legally denied the right to vote. This is language testing being used to suppress a minority group.

Various Modes of Assessment

CLT also claims that there should be various modes of assessing. This critique comes from the known fact that not all students do well in traditional testing modes. Furthermore, it is also well-documented that students have multiple intelligences.

It is hard to refute the claim for diverse testing methods. The primary problem is the practicality of such a request. Various assessment methods are normally impractical, and they can also affect the validity of the assessment. Again, most of the time testing works, and it is hard to make exceptions.

Conclusion

CLT provides an important perspective on the use of assessment in language teaching. These concerns should be in the minds of test makers as they try to continue to improve how they develop assessments. This holds true even if the concerns of CLT cannot be addressed.


Developing Standardized Tests

For better or worse, standardized testing is a part of the educational experience of most students and teachers. The purpose here is not to attack or defend its use. Instead, in this post, we will look at how standardized tests are developed.

There are five primary steps in developing a standardized test. These steps are

  1. Determine the goals
  2. Develop the specifications
  3. Create and evaluate test items
  4. Determine scoring and reporting
  5. Continue further development

Determining Goals

The goals of a standardized test are similar to the purpose statement of a research paper in that they determine the scope of the test. By scope, it is meant what the test will and perhaps will not do. This is important in terms of setting the direction for the rest of the project.

For example, the purpose of the TOEFL is to evaluate English proficiency. This means that the TOEFL does not deal with science, math, or other subjects. This may seem trivial to many, but this purpose makes it clear what the TOEFL is about.

Develop the Specifications

Specifications have to do with the structure of the test. For example, a test can have multiple-choice, short answer, essay, fill in the blank, etc. The structure of the test needs to be determined in order to decide what types of items to create.

Most standardized tests are primarily multiple-choice. This is due to the scale on which the tests are given. However, some language tests are including a writing component as well now.

Create Test Items

Once the structure is set it is now necessary to develop the actual items for the test. This involves a lot with item response theory (IRT) and the use of statistics. There is also a need to ensure that the items measure the actual constructs of the subject domain.

For example, the TOEFL must be sure that it is really measuring language skills. This is done through consulting experts as well as statistical analysis to know for certain they are measuring English proficiency. The items come from a bank and are tested and retested.

Determine Scoring and Reporting

The scoring and reporting need to be considered. How many points is each item worth? What is the weight of one section of the test? Is the test norm-referenced or criterion-referenced? How many people will mark each test? These are some of the questions to consider.

The scoring and reporting matter a great deal because the scores can affect a person’s life significantly. Therefore, this aspect of standardized testing is treated with great care.

Further Development

A completed standardized test needs to be continuously reevaluated. Ideas and theories in a body of knowledge change frequently and this needs to be taken into account as the test goes forward.

For example, the SAT over the years has changed the point values of their test as well as added a writing component. This was done in reaction to concerns about the test.

Conclusion

The concepts behind developing standardized tests can be useful even for teachers making their own assessments. There is no need to follow this process as rigorously. However, familiarity with this strict format can help guide assessment development in many different situations.

Item Indices for Multiple Choice Questions

Many teachers use multiple choice questions to assess students' knowledge of a subject matter. This is especially true if the class is large and marking essays would prove to be impractical.

Even if best practices are used in making multiple choice exams, it can still be difficult to know if the questions are doing the work they are supposed to. Fortunately, there are several quantitative measures that can be used to assess the quality of a multiple choice question.

This post will look at three ways that you can determine the quality of your multiple choice questions using quantitative means. These three items are

  • Item facility
  • Item discrimination
  • Distractor efficiency

Item Facility

Item facility measures the difficulty of a particular question. This is determined by the following formula

Item facility = (number of students who answered the item correctly) ÷ (total number of students who answered the item)

This formula simply calculates the percentage of students who answered the question correctly. There is no boundary for a good or bad item facility score. Your goal should be to try and separate the high ability from the low ability students in your class with challenging items with a low item facility score. In addition, there should be several easier items with a high item facility score for the weaker students to support them as well as serve as warmups for the stronger students.
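As a rough illustration, the calculation can be sketched in a few lines of code. The function name and the response data below are hypothetical:

```python
def item_facility(correct_flags):
    """Proportion of students who answered the item correctly."""
    return sum(correct_flags) / len(correct_flags)

# Hypothetical item: 18 of 24 students answered correctly.
responses = [True] * 18 + [False] * 6
print(item_facility(responses))  # 0.75
```

An item facility near 1.0 marks an easy item; a value near 0.0 marks a difficult one.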

Item Discrimination

Item discrimination measures a question's ability to separate the strong students from the weak ones. It is calculated with the following formula.

Item discrimination = (number correct in the strong group − number correct in the weak group) ÷ (½ × the combined number of students in the two groups)

The first thing that needs to be done in order to calculate the item discrimination is to divide the class into three groups by rank. The top 1/3 is the strong group, the middle third is the average group and the bottom 1/3 is the weak group. The middle group is removed and you use the data on the strong and the weak to determine the item discrimination.

The results of the item discrimination range from zero (no discrimination) to 1 (perfect discrimination). There are no hard cutoff points for item discrimination. However, values near zero are generally removed while a range of values above that is expected on an exam.
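The calculation above can be sketched as follows; the group sizes and score counts are made-up numbers for illustration:

```python
def item_discrimination(strong_correct, weak_correct, n_strong, n_weak):
    """Difference in correct answers between the strong and weak groups,
    divided by half the combined size of the two groups."""
    return (strong_correct - weak_correct) / (0.5 * (n_strong + n_weak))

# Hypothetical class of 30: the middle 10 students are removed,
# leaving a strong group of 10 and a weak group of 10.
# 9 strong students and 3 weak students answered the item correctly.
print(item_discrimination(9, 3, 10, 10))  # 0.6
```

A value near 0.6, as here, would indicate an item that separates the two groups reasonably well.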

Distractor Efficiency

Distractor efficiency looks at the individual responses that the students select in a multiple choice question. For example, if a multiple choice has four possible answers, there should be a reasonable distribution of students who picked the various possible answers.

Distractor efficiency is tabulated by simply counting which answer students select for each question. Again, there are no hard rules for removal. However, if nobody selected a distractor, it may not be a good one.
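Tallying the selections is straightforward; below is a minimal sketch using invented answer data for a single four-option item:

```python
from collections import Counter

def distractor_counts(choices):
    """Tally how many students selected each option for one item."""
    return Counter(choices)

# Hypothetical responses to a four-option item (correct answer: 'B').
picks = list("BBABBCBDBBABBBCB")
counts = distractor_counts(picks)

# Options nobody selected may be weak distractors worth rewriting.
unused = [opt for opt in "ABCD" if counts[opt] == 0]
print(counts, unused)
```

Here every distractor drew at least one response, so none stands out as an obvious candidate for removal.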

Conclusion

Assessing multiple choice questions becomes much more important as the size of the class grows or the test needs to be reused multiple times in various contexts. The information covered here is only an introduction to the much broader subject of item response theory.

Tips for Developing Tests

Assessment is a critical component of education. One form of assessment  that is commonly used is testing. In this post, we will look at several practical tips for developing tests.

Consider the Practicality

When developing a test, it is important to consider the time constraints as well as the time it will take to mark the test. For example, essays are a great form of assessment that really encourages critical thinking. However, if the class has 50 students, the practicality of essay tests quickly disappears.

The point is that the context of teaching shapes what is considered practical. What is practical can change from year to year as the teacher adjusts to new students.

Think about the Reliability

Reliability is the consistency of the score that the student earns. This can be affected by the setting of the test as well as the person who marks the test. It is difficult to maintain consistency when marking subjective answers such as short answer or essay questions. However, it is important that this is still done.

Consider Validity

Validity in this context has to do with whether the test covers objectives that were addressed in the actual teaching. Assessing this is subjective but needs to be considered. What is taught is what should be on the test. This is easier said than done, as poor planning can lead to severely poor testing.

The students also need to be somewhat convinced that the testing is appropriate. If not, it can lead to problems and complaints. Furthermore, a test that seems invalid from the students' perspective can lead to cheating, as the students will cheat in order to survive.

Make it Authentic

Tests, if possible, should mimic real-world behaviors. This enhances relevance and validity for students. One of the main problems with authentic assessment is what to do when it is time to mark it. Real-world behaviors cannot always be reduced to a single letter grade. This concern is closely related to practicality.

Washback

Washback is the experience of learning from an assessment. This normally entails some sort of feedback that the teacher provides the student. This personal attention encourages reflection, which aids in comprehension. Often, it will happen after the testing as the answers are reviewed.

Conclusion

Tests can be improved by keeping in mind the concepts addressed in this post. Teachers and students can have better experiences with testing by maintaining practical assessments that are valid and that provide authentic experiences as well as insights into how to improve.

Washback

Washback is the effect that testing has on teaching and learning. The term is commonly used in language assessment, but it is not limited to only that field. One of the primary concerns of many teachers is developing assessments that provide washback, or that enhance students' learning and understanding of ideas in a class.

This post will discuss three ways in which washback can be improved in a class. The three ways are…

  • Written feedback on exams
  • Go over the results as a class
  • Meetings with students on exam performance

Written Feedback

Exams or assignments that are highly subjective (i.e., essays) require written feedback in order to provide washback. This means specific, personalized feedback for each student. This is a daunting task for most teachers, especially as classes get larger. However, if your goal is to improve washback, providing written comments is one way to achieve this.

The letter grade or numerical score a student receives on a test does not provide insights into how the student can improve. The reasoning behind what is right or wrong can be provided in the written feedback.

Go Over Answers in Class

Perhaps the most common way to enhance washback is to go over the test in class. This allows the students to learn what the correct answer is, as well as why that answer is correct. In addition, students are given time to ask questions and seek clarification of the reasoning behind the teacher's marking.

If there were common points of confusion, going over the answers in this way allows for the teacher to reteach the confusing concepts. In many ways, the test revealed what was unclear and now the teacher is able to provide support to achieve mastery.

One-on-One Meetings

For highly complex and extremely subjective forms of assessment (i.e., a research paper), one-on-one meetings may be the most appropriate. This may require a more personal touch and a greater amount of time.

During the meeting, students can have their questions addressed and learn what they need to do in order to improve. This is a useful method for assignments that require several rounds of feedback in order to be completed.

Conclusion

Washback, if done properly, can help with motivation, autonomy, and self-confidence of students. What this means is that assessment should not only be used for grades but also to develop learning skills.

Understanding Testing

Testing is standard practice in most educational contexts. A teacher needs a way to determine what level of knowledge the students currently have or have gained through the learning experience. However, identifying what testing is and is not has not always been clear.

In this post, we will look at exactly what testing is. In general, testing is a way of measuring a person's ability and or knowledge in a given area of study. Specifically, there are five key characteristics of a test, and they are…

  • Systematic
  • Quantifiable
  • Individualistic
  • Competence
  • Domain specific

Systematic

A test must be well organized and structured. For example, the multiple choice items are in one section while the short answers are in a different section. If an essay is required, there is a rubric for grading. Directions for all sections are in the test to explain the expectations to the students.

This is not as easy or as obvious as some may believe. Developing a test takes a great deal of planning before the actual creation of the test.

Quantifiable

Tests are intended to measure something. A test can measure general knowledge, such as a proficiency test of English, or a test can be specific, such as a test that only looks at vocabulary memorization. Either way, it is important for both the student and teacher to know what is being measured.

Another obvious point that is sometimes missed by test makers is the reporting of results. How many points each section, and even each question, is worth is important for students to know when taking a test. This information is also critical for the person who is responsible for grading the tests.

Individualistic 

Tests are primarily designed to assess a student's individual knowledge or performance. This is a Western concept of the responsibility of a person to have individual expertise in a field of knowledge.

There are examples of groups working together on tests. However, group work is normally left to projects and not formal modes of assessment such as testing.

Competence

As has already been alluded to, tests assess competence either through the knowledge a person has about a subject or their performance doing something. For example, a vocabulary test assesses knowledge of words, while a speaking test would assess a person's ability to use words, or their performance.

Generally, a test is either knowledge or performance based. It is possible to blend the two; however, mixing styles raises the complexity not only for the student but also for the person who is responsible for marking the results.

Domain Specific

A test needs to be focused on a specific area of knowledge. A language test, for example, is specific to language. A teacher needs to know in what specific area they are trying to assess students' knowledge or performance. This is not always easy to define, as there are not only domains but also sub-domains and many other ways to divide up the information in a given course.

Therefore, a teacher needs to identify what students need to know as well as what they should know and assess this information when developing a test. This helps to focus the test on relevant content for the students.

Conclusion

There is an art and a science to testing. There is no simple solution for how to set up tests to help students. However, the five concepts here provide a framework that can help a teacher get started in developing tests.

Discrete-Point and Integrative Language Testing Methods

Within language testing, at least two major viewpoints on assessment have arisen over time. Originally, the view was that assessing language should look at specific elements of a language, or you could say that language assessment should look at discrete aspects of the language.

A reaction to these discrete methods came about with the idea that language is holistic, so testing should be integrative, or address many aspects of language simultaneously. In this post, we will take a closer look at discrete and integrative language testing methods by providing examples of each along with a comparison.

Discrete-Point Testing

Discrete-point testing works on the assumption that language can be reduced to several discrete component “points” and that these “points” can be assessed. Examples of discrete-point test items in language testing include multiple choice, true/false, fill in the blank, and spelling.

What all of these example items have in common is that they usually isolate an aspect of the language from the broader context. For example, a simple spelling test is highly focused on the orthographic characteristics of the language. True/false can be used to assess knowledge of various grammar rules etc.

The primary criticism of discrete-point testing was its discreteness. Many believe that language is holistic and that in the real world students will never have to deal with language in such an isolated way. This led to the development of integrative language testing methods.

Integrative Language Testing Methods

Integrative language testing is based on the unitary trait hypothesis, which states that language is indivisible. This is in complete contrast to discrete-point methods, which support dividing language into specific components. Two common integrative language assessments are the cloze test and dictation.

A cloze test involves taking an authentic reading passage and removing words from it. Which words are removed depends on the test creator. Normally, it is every 6th or 7th word, but it could be more or less, or only the removal of key vocabulary. In addition, sometimes a list of potential words is given to the student to select from, and sometimes no list is given.

The student’s job is to look at the context of the entire story to determine which words to write into the blank space.  This is an integrative experience as the students have to consider grammar, vocabulary, context, etc. to complete the assessment.

Dictation is simply writing down what was heard. This also requires the use of several language skills simultaneously in a realistic context.

Integrative language testing also has faced criticism. For example, discrete-point testing has always shown that people score differently in different language skills and this fact has been replicated in many studies. As such, the exclusive use of integrative language approaches is not supported by most TESOL scholars.

Conclusion

As with many other concepts in education, the best choice between discrete-point and integrative testing is a combination of both. The exclusive use of either will not allow the students to demonstrate mastery of the language.

Distributed Practice: A Key Learning Technique

A key concept in teaching and learning is the idea of distributed practice. Distributed practice is a process in which the teacher deliberately arranges for their students to practice a skill or use knowledge in many learning sessions that are short in length and distributed over time.

The purpose behind employing distributed practice is to allow for the reinforcement of the material in the student’s mind through experiencing the content several times. In this post, we will look at pros and cons of distributed practice as well as practical applications of this teaching technique

Pros and Cons

Distributed practice helps to maintain student motivation through requiring short spans of attention and motivation. For most students, it is difficult to study anything for long periods of time. Through constant review and exposure, students become familiar with the content.

Another benefit is the prevention of mental and physical fatigue. This is related to the first point. Fatigue interferes with information processing. Therefore, a strategy that reduces fatigue can help in students’ learning new material.

However, there are times when short intense sessions are not enough to achieve mastery. Project learning may be one example. Completing a project often requires several long stretches of work that are not conducive to distributed practice.

Application Examples

When using distributed practice, it is important to remember to keep the length of the practice short. This maintains motivation. In addition, the time between sessions should initially be short as well and lengthen as mastery develops. If the practice sessions are too far apart, students will forget.

Lastly, the skill should be practiced over and over for a long period of time. How long depends on the circumstances. The point is that distributed practice takes a commitment to returning to a concept the students need to master over a long stretch of time.

One of the most practical examples of distributed practice may be in any curriculum that employs a spiral approach. A spiral curriculum is one in which key ideas are visited over and over through a year or even over several years of curriculum.

For our purposes, distributed practice is perhaps a spiral approach employed within a unit plan or over the course of a semester. This can be done in many ways, such as the following.

  • The use of study guides to prepare for quizzes
  • Class discussion
  • Student presentations of key ideas
  • Collaborative project

The primary goal should be to employ several different activities that require students to return to the same material from different perspectives.

Conclusions

Distributed practice is a key teaching technique that many teachers employ even if they are not familiar with the term. Students cannot master an idea or skill after seeing it once. They must be exposed to it several times in order to develop mastery. As such, understanding how to distribute practice is important for student learning.

Direct and Indirect Test Items

In assessment, there are two categories that most test items fall into: direct and indirect test items. Direct test items ask the student to complete some sort of authentic action. Indirect test items measure a student's knowledge about a subject. This post will provide examples of test items that are either direct or indirect.

Direct Test Items

Direct test items use authentic assessment approaches. Examples in TESOL would include the following…

  • For speaking: Interviews and presentations
  • For writing: Essay questions
  • For reading: Using real reading material and having the student respond to questions verbally and or in writing
  • For listening: Following oral directions to complete a task

The primary goal of direct test items is to be as much like real-life as possible. Often, direct testing items are integrative, which means that the student has to apply several skills at once. For example, presentations involve more than just speaking but also the writing of the speech, the reading or memorizing of the speech as well as the critical thinking skills to develop the speech.

Indirect Test Items

Indirect test items assess knowledge without authentic application. Below are some common examples of indirect test items.

  • Multiple choice questions
  • Cloze items
  • Paraphrasing
  • Sentence re-ordering

Multiple Choice

Multiple choice questions involve the use of a question followed by several potential answers. It is the job of the student to determine the most appropriate answer. One challenge with writing multiple choice questions is the difficulty of writing incorrect choices: for every correct answer, you need several wrong ones. Another problem is that, with training, students can learn how to improve their success on multiple choice tests without having stronger knowledge of the subject matter.

Cloze Items

Cloze items involve giving the student a paragraph or sentence with one or more blanks that the student has to complete. One problem with cloze items is that more than one answer may be acceptable for a blank. This can lead to a great deal of confusion when marking the test.

Paraphrasing

Paraphrasing is strictly for TESOL and involves having the student rewrite a sentence in a slightly different way, as in the example below.

“I’m sorry I did not go to the assembly”

I wish________________________________

In the example above, the student needs to rewrite the sentence in quotes starting with the phrase “I wish.” The challenge is determining whether the paraphrase is reasonable, as this is highly subjective.

Sentence Re-Ordering

In this item for TESOL assessment, a student is given a sentence that is out of order, and they have to arrange the words so that an understandable sentence is developed. This is one way to assess knowledge of syntax. The challenge is that, for complex sentences, more than one answer may be possible.

It is important to remember that all indirect items can be integrative or discrete-point. Unlike integrative items, discrete-point items only measure one narrow aspect of knowledge at a time.

Conclusion

A combination of direct and indirect test items would probably best ensure that a teacher is assessing students so that they have success. What mixture of the two to use always depends on the context and needs of the students.


Test Validity

Validity is often seen as a close companion of reliability. Validity is the assessment of the evidence that indicates that an instrument is measuring what it claims to measure. An instrument can be highly reliable (consistent in measuring something) yet lack validity. For example, an instrument may reliably measure motivation but not be valid for measuring income. The problem is that an instrument that measures motivation would not measure income appropriately.

In general, there are several ways to measure validity, which includes the following.

  • Content validity
  • Response process validity
  • Criterion-related evidence of validity
  • Consequence testing validity
  • Face validity

Content Validity

Content validity is perhaps the easiest way to assess validity. In this approach, the instrument is given to several experts who assess its appropriateness or validity. Based on their feedback, a determination of validity is made.

Response Process Validity

In this approach, the respondents to an instrument are interviewed to see if they considered the instrument to be valid. Another approach is to compare the responses of different respondents for the same items on the instrument. High validity is determined by the consistency of the responses among the respondents.

Criterion-Related Evidence of Validity

This form of validity involves measuring the same variable with two different instruments. The instruments can be administered over time (predictive validity) or simultaneously (concurrent validity). The results are then analyzed by finding the correlation between the two instruments. The stronger the correlation, the stronger the evidence of validity for both instruments.

Consequence Testing Validity

This form of validity looks at what happens to the environment after an instrument is administered. An example of this would be improved learning due to a test. Since the students are studying harder, it can be inferred that this is due to the test they just experienced.

Face Validity

Face validity is the perception that the students have that a test measures what it is supposed to measure. This form of validity cannot be tested empirically. However, it should not be ignored. Students may dislike assessment but they know if a test is testing what the teacher tried to teach them.

Conclusion 

Validity plays an important role in the development of instruments in quantitative research. Which form of validity to use to assess the instrument depends on the researcher and the context that he or she is facing.

Assessing Reliability

In quantitative research, reliability measures an instrument's stability and consistency. In simpler terms, reliability is how well an instrument is able to measure something repeatedly. Several factors can influence reliability, including unclear questions or statements, poor test administration procedures, and even the participants in the study.

In this post, we will look at different ways that a researcher can assess the reliability of an instrument. In particular, we will look at the following ways of measuring reliability…

  • Test-retest reliability
  • Alternative forms reliability
  • Kuder-Richardson Split Half Test
  • Coefficient Alpha

Test-Retest Reliability

Test-retest reliability assesses the reliability of an instrument by comparing results from the same sample over time. A researcher will administer the instrument at two different times to the same participants. The researcher then analyzes the data and looks for a correlation between the results of the two administrations of the instrument. In general, a correlation above about 0.6 is considered evidence of reasonable reliability of an instrument.

One major drawback of this approach is that giving the same instrument to the same people a second time often influences the results of the second administration. It is important that a researcher is aware of this, as it indicates that test-retest reliability is not foolproof.

Alternative Forms Reliability 

Alternative forms reliability involves the use of two different instruments that measure the same thing. The two different instruments are given to the same sample. The data from the two instruments are analyzed by calculating the correlation between them. Again, a correlation around 0.6 or higher is considered as an indication of reliability.

The major problem with this is that it is difficult to find two instruments that really measure the same thing. Scales may claim to measure the same concept, yet each may have a different operational definition of that concept.

Kuder-Richardson Split Half Test

The Kuder-Richardson test assesses the reliability of instruments with categorical (e.g., right/wrong) items. In this approach, an instrument is cut in half, and the correlation is found between the two halves of the instrument. This approach looks at the internal consistency of the items of an instrument.

Coefficient Alpha

Another approach that looks at internal consistency is coefficient alpha. This approach involves administering an instrument and analyzing the Cronbach's alpha. Most statistical programs can calculate this number. Normally, scores above 0.7 indicate adequate reliability. Coefficient alpha can only be used for continuous variables, such as Likert scales.
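Cronbach's alpha can also be computed directly from item variances. A minimal sketch with invented Likert-style ratings (the function and data are hypothetical, not from any particular statistics package):

```python
from statistics import variance  # sample variance

def cronbach_alpha(item_matrix):
    """alpha = k/(k-1) * (1 - sum of item variances / variance of totals).
    item_matrix: one row of item ratings per student."""
    k = len(item_matrix[0])
    item_vars = [variance([row[i] for row in item_matrix]) for i in range(k)]
    total_var = variance([sum(row) for row in item_matrix])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Hypothetical 1-5 Likert ratings for five students on four items.
ratings = [
    [4, 5, 4, 5],
    [3, 4, 3, 4],
    [5, 5, 4, 5],
    [2, 3, 2, 2],
    [4, 4, 5, 4],
]
alpha = cronbach_alpha(ratings)
print(round(alpha, 2))  # above 0.7 suggests adequate internal consistency
```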

Conclusion

Assessing reliability is important when conducting research. The approaches discussed here are among the most common. Which approach is best depends on the circumstances of the study that is being conducted.

Reasons for Testing

Testing is done for many different reasons in various fields such as education, business, and even government. There are many motivations that people have for using evaluation. In this post, we will look at five reasons that testing is done. The five reasons are…

  • For placement
  • For diagnoses
  • For assessing progress
  • For determining proficiency
  • For providing evidence of competency

For Placement

Placement tests serve the purpose of determining at what level a student should be placed. They are often given at the beginning of a student's learning experience at an institution, often before taking any classes. Normally, the test will consist of the specific subject knowledge that a student needs to know in order to have success at a certain level.

For Diagnoses

Diagnostic tests are for identifying weaknesses or learning problems. They are similar to a doctor looking over a patient and trying to diagnose the patient's health problem. Diagnostic tests help in identifying gaps in knowledge and help a teacher to know what they need to do to help their students.

For Assessing Progress

Progress tests are used to assess how the students are doing in comparison to the goals and objectives of the curriculum. At the university level, these are the mid-terms and final exams that students take. How well the students are able to achieve the objectives of the course is measured by progress tests.

For Determining Proficiency 

Testing for proficiency provides a snapshot of what the student is able to do right now. Proficiency tests do not point out weaknesses like diagnostic tests, nor do they assess progress against a curriculum like progress tests. Common examples of this type are the tests used to determine admission into a program, such as the SAT, MCAT, or GRE.

For Providing Evidence of Competency 

Sometimes, people are not satisfied with traditional means of evaluation. They want to see what the student can do by examining the student's performance over several assignments over the course of a semester. This form of assessment provides a way of having students produce work that demonstrates improvement in the classroom.

One of the most common forms of assessment that provides evidence of competency is the portfolio. In this approach, the students collect assignments that they have done over the course of the semester to submit. The teacher is able to see the student's progress and improvement over time. Such evidence is harder to track using tests.

Conclusions

How to assess is best left for the teacher to decide. However, teachers need options they can use when determining how to assess their students. The examples provided here give teachers ideas on what assessments they can use in various situations.

Giving Feedback on Written Work

Marking papers and providing feedback is always a chore. However, nothing seems to be more challenging in teaching than providing feedback on written work. There are so many things that can go wrong when students write. Furthermore, the mistakes made are often totally unique to each student. This makes it challenging to try to solve problems by teaching all the students at once. Feedback for writing must often be tailor-made for each student. Doing this for a small class is doable, but few have the luxury of teaching a handful of students.

Despite the challenge, there are several practical ways to streamline the experience of providing feedback for papers. Some ideas include the following:

  • Structuring the response
  • Training the students
  • Understanding your purpose for marking

Structuring the Response

A response to a student should include the following two points

  1. What went well (positive feedback)
  2. What needs improvement (constructive feedback)

The response should be short and sweet, no more than a few sentences. It is not necessary to report every flaw to the student. Rather, point out the major ones and deal with other problems later.

If it is too hard to explain what went wrong, sometimes providing an example of a rewritten paragraph from the student's paper is enough to give feedback. The student compares your writing with their own to see what needs to be done.

Training Students

Students need to know what you want. This means that clear communication about expectations saves time on providing feedback. Providing rubrics is one way of lessening a teacher's workload. Students see the expectations for the grade they want and target those expectations accordingly. The rubric also helps the teacher to be more consistent in marking papers and providing feedback.

Peer-evaluation is another tool for saving time. Students are more likely to think about what they are doing when hearing it from peers. In addition, students can find some of the smaller problems, such as grammar, so that the teacher can focus on shaping the ideas of the paper. Depending on the maturity of the students, it is better to let peers look at a paper before you invest any energy in providing feedback.

What’s Your Purpose

Many teachers will mark papers and try to catch everything every single time. This means that they are looking at the flow of the paragraphs and the connection of the main ideas while also catching typos and grammatical mistakes. This approach is often overwhelming and extremely time-consuming. In addition, it is discouraging to students who receive papers that are covered in red.

Another approach is what is called selective marking. Selective marking is when a teacher focuses only on specific issues in a paper. For example, a teacher might only focus on paragraph organization for a first draft and focus on the overall flow of the paper later. With this focus, the teacher and students can handle similar issues at the same time that are much more defined than checking everything at once.

Personally, I believe it is best to focus on macro issues such as paragraph organization and overall consistency first before focusing on grammatical issues. If the ideas are going in the right direction, it is easier to spot grammar issues. In addition, if the students know English well, most grammar issues are irritating rather than crippling to understanding the thrust of the paper. However, perfect grammar without a thesis makes for a hopeless paper.

Conclusion 

There is no reason to overwork ourselves in marking papers. Basic adjustments in strategy can provide students with feedback without a teacher overdoing it.

Dealing with Mistakes and Providing Feedback

Students are in school to learn. We learn most efficiently when we make mistakes. Understanding how students make mistakes and the various types of mistakes that can happen can help teachers to provide feedback.

Julian Edge describes three types of mistakes:

  • Slips-Miscalculations that students make and can fix themselves
  • Errors-Mistakes students cannot fix on their own without assistance
  • Attempts-A student tries but does not yet know how to do it

As teachers, it is the last two that concern us most. Helping students with errors and providing assistance with attempts is critical to the development of student learning.

Assessing Students

Students need to know at least two things whenever they are given feedback:

  1. What they did well (positive feedback)
  2. What they need to do in order to improve (constructive feedback)

Positive feedback provides students with an understanding of what they have mastered. Whatever they did correctly are things they do not need to worry about for now. Knowing this helps students to focus on their growth areas.

Constructive feedback indicates to students what they need to work on. It is not enough to tell students what is wrong. A teacher should also provide suggestions on how to deal with the mistakes. The suggestions for improvement become the standard by which the student is judged in the future.

For example, if a student writing an essay is struggling with passive voice, the teacher indicates what the problem is. After this, the teacher provides suggestions or even examples of switching from passive to active voice. When the essay is submitted again, the teacher looks for improvement in this particular area of the assignment.

Ways of Giving Feedback

Below are some ways to provide feedback to students

  • Comments-A common method. The teacher writes on the assignment the positive and constructive feedback. This can be used in almost any situation but can be very time-consuming.
  • Grades-This approach is most useful for a summative assessment or when students are submitting something for the final time. The grade indicates the level of mastery that the student has achieved.
  • Self-evaluation-Students judge themselves. This is best done through providing them with a rubric so that they evaluate their performance. Very useful for projects and saves the teacher a great deal of time
  • Peer-evaluation-Same as above except that peers evaluate the student instead of the student evaluating himself or herself.

Making mistakes is what students do. It is the teacher's responsibility to turn mistakes into learning opportunities. This can happen through careful feedback that encourages growth rather than discouragement.

Assessing Learning

Assessment is focused on determining a student's progress as related to academics. In this post, we will examine several types of assessment common in education today. The types we will look at are:

  • Direct observation
  • Written responses
  • Oral responses
  • Rating by others

Direct Observation

Direct observation refers to instances in which a teacher watches a student to see if learning has occurred. For example, a parent who has instructed a child in how to tie their shoe will watch the child doing this. When the child is observed to be successful, the parent is assured that learning has occurred. If the child is not successful, the parent knows to provide some form of intervention, such as reteaching, to help the child have success.

Problems with direct observation include the issue of only being able to focus on what is seen. There is no way of knowing what is going on in the child's mind. Another challenge is that just because the behavior is not observed does not mean that no learning has happened. Students can, at times, understand without being able to perform.

Written Response

Written response involves assessing a student's response in writing. These can take the form of tests, quizzes, homework, and more. The teacher reads the student's response and determines if there is adequate evidence to indicate that learning has happened. Appropriate answers indicate evidence of learning.

In terms of problems, written responses can be a problem for students who lack writing skills. This is especially true for ESL students. In addition, writing takes substantial thinking skills that some students may not possess.

Oral Responses

Oral responses involve a student responding verbally to a question or sharing their opinion. Again, issues with language can be a barrier, along with difficulties in expressing and articulating one's opinion. Culturally, many parts of the world do not encourage students to express themselves verbally. This puts some students at a disadvantage when this form of assessment is employed.

For teachers leading a discussion, it is often critical that they develop methods for rephrasing student comments as well as strategies for developing thinking skills through the use of questions.

Rating by Others

Rating by others can involve teachers, parents, administrators, peers, etc. These individuals assess the performance of a student and provide feedback. The advantages of this include having multiple perspectives on a student's progress. Every individual has their own biases, but when several people assess, such threats to validity are reduced.

Problems with rating by others include finding people who have the time to come and watch a particular student. Another issue is training the raters to assess appropriately. As such, though this is an excellent method, it is often difficult to use.

Conclusion

The tools mentioned in this post are intended to help people new to teaching to see different options in assessment. When assessing students, multiple approaches are often best. They provide a fuller picture of what the student can do. Therefore, when looking to assess students, consider several different approaches to verify that learning has occurred.

Portfolio Assessment

One type of assessment that has been popular for a long time is the portfolio. A portfolio is usually a collection of student work over a period of time. There are five common steps to developing student portfolios. These steps are:

  1. Determine the purpose of the portfolio.
  2. Identify evidence of skill mastery to be in the portfolio.
  3. Decide who will develop the portfolio.
  4. Pick evidence to place in the portfolio.
  5. Create a portfolio rubric.

1. Determine the Purpose of the Portfolio

The student needs to understand the point of the portfolio experience. This helps in creating relevance for the student as well as enhancing the authenticity of the experience. Common reasons for developing portfolios include the following…

  • assessing progress
  • assigning grades
  • communicating with parents

2. Identify Evidence of Skill Mastery

The teacher and the students need to determine what skills the portfolio will provide evidence for. Common skills that portfolios provide evidence for are the following:

  • Complex thinking processes-The use of information such as essays
  • Products-Development of drawings, graphs, songs, etc.
  • Social skills-Evidence of group work

3. Who will Develop the Portfolio

This step has to do with deciding on who will set the course for the overall development of the portfolio. At times, it is the student who has complete authority to determine what to include in a portfolio. At other times, it is the student and the teacher working together. Sometimes, even parents provide input into this process.

4. Pick the Evidence for the Portfolio

The evidence provided must support the skills mentioned in step two. Depending on who has the power to select evidence, they may still need support in determining if the evidence they selected is appropriate. Regardless of the requirements, the student needs a sense of ownership in the portfolio.

5. Develop Portfolio Rubric

The teacher needs to develop a rubric for the purpose of grading the student. The teacher needs to explain what they want to see as well as what the various degrees of quality are.

Conclusion

Portfolios are a useful tool for helping students in assessing their own work. Such a project helps in developing a deeper understanding of what is happening in the classroom. Teachers need to determine for themselves when portfolios are appropriate for their students.

After the Exam: Grading Systems II

In this post, we conclude our discussion on grading systems by looking at less common approaches. There are at least three other approaches to grading. These systems are comparison with aptitude, comparison with effort, and comparison with improvement.

Comparison with Aptitude

In this approach, a student is compared with their own potential. In other words, the teacher grades the student on whether or not the student is reaching their full potential on an assignment as determined by the teacher. For example, if an average student does average work, they get an “A.” However, if an excellent student does average work they get a “C”.  To get an “A”, the excellent student must do excellent work as determined by the teacher.

The advantage of this system is that everyone, regardless of ability, has a chance at earning high grades. However, the disadvantages are serious. The teacher gets to decide what potential a student has. If the teacher is wrong, weak students may be pushed too hard or strong students may not be pushed hard enough. This grading is also unfair to stronger students, as weaker students earn the same grade for inferior work.

Comparison with Effort

This approach does not look at potential as much as it looks at how hard a student works. To receive a higher grade an average student must demonstrate a great deal of effort on a test. For the strong student, if they show little effort on an assessment they will receive a lower grade.

This system has the same advantages and disadvantages as the aptitude system. It is unfair for the stronger students to be held to a different standard in comparison to their peers. Also, it is hard to be objective when determining the amount of effort a student puts forth.

Comparison with Improvement

This system of grading looks at the progress a student makes over time to assign a grade. Students who improve the most will receive the highest grade. Students who show little improvement will not do so well.

This system is more objective than the previous two examples because it relies on data collected over time that is more than a teacher’s impression. However, one significant drawback is the student who does well from the beginning. If a student is strong from the beginning there will be little improvement. Committing to this grading system could hurt high-performing students.
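A minimal sketch of how improvement-based grading might be computed; the gain cutoffs here are invented purely for illustration, not a standard:

```python
def improvement_grade(first_score, last_score):
    """Assign a letter based on the raw gain between an early and a late
    assessment. The cutoffs are hypothetical, for illustration only."""
    gain = last_score - first_score
    if gain >= 20:
        return "A"
    elif gain >= 10:
        return "B"
    elif gain >= 5:
        return "C"
    return "D"

# The drawback described above: a student who starts strong
# has little room to improve and is punished for it.
print(improvement_grade(55, 80))  # large gain earns an "A"
print(improvement_grade(92, 95))  # strong from the start, small gain: "D"
```

The second call makes the drawback concrete: the 92-to-95 student outperformed everyone yet receives the lowest grade.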

Conclusion

Which system to use depends on the context and needs of your students. The number one rule for grading is to maintain consistency within one assessment, but it is perhaps okay to be flexible from one assignment to the next.

After the Exam: Grading Systems

After the students submit their exams and they have been marked by you, it is time to determine the grades. This can actually be very controversial as there are different grading systems. In this discussion, we will look at two of the most common grading systems and examine their advantages and disadvantages. The grading systems discussed in this blog are comparison with other students and comparison with a standard.

Comparison with Students

Comparison with students is the process of comparing the results of one student with the results of another student. Another term for this is "grading on the curve." For example, if a test is worth 100 points and the highest score is 85, the total points possible would be reduced to 85. The removal of 15 points raises the grade of all of the students significantly because the standard is the 85 of the highest performing student rather than the absolute value of 100.
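The arithmetic of this kind of curving can be sketched in a few lines of Python (the names and scores are hypothetical):

```python
def curve_to_top(raw_scores):
    """Re-scale exam scores so the highest raw score counts as 100%,
    as in the 85-point example above."""
    top = max(raw_scores.values())
    return {name: round(score / top * 100, 1)
            for name, score in raw_scores.items()}

curved = curve_to_top({"Ann": 85, "Ben": 70, "Cam": 60})
# Ann's 85 becomes the new maximum, and everyone rises proportionally:
# Ann 100.0, Ben 82.4, Cam 70.6
```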

Students, particularly the average and low performing ones, love this approach. The reason for this is that they get a boost in their grade without having to demonstrate any further evidence of proficiency in meeting the objectives. Teachers often appreciate this method as well, as it helps students and reduces the pressure of having to fail individuals or give students low grades.

A drawback to this approach is the pressure it places on high-performing students. The good students face pressure to not study as much in order to have a lower grade that benefits the group. Students also have a way of finding out who got the highest score and this can lead to social problems for stronger students.

One way to avoid the pressure on the top student is to specify a percentage of students who will receive a certain grade. For example, the top 10% of students will receive an "A," the next 10% of students will receive a "B," and so on. This makes the top performers a group of students rather than an individual. However, student performance becomes categorical rather than continuous, which some may claim is not accurate.
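Rank-based grading of this kind can be sketched as follows; the band fractions are illustrative, not a recommendation:

```python
def grade_by_rank(scores, bands=(("A", 0.10), ("B", 0.10), ("C", 0.80))):
    """Assign letters by class rank: the top 10% get an "A", the next
    10% a "B", and so on. Bands are (letter, fraction) pairs summing to 1."""
    ranked = sorted(scores, key=scores.get, reverse=True)
    grades, start = {}, 0
    for letter, fraction in bands:
        count = max(1, round(fraction * len(ranked)))
        for name in ranked[start:start + count]:
            grades[name] = letter
        start += count
    return grades

# With ten students, exactly one earns an "A" and one a "B",
# regardless of how close the raw scores are.
```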

A question to ask yourself when determining the appropriateness of "grading on a curve" is whether it suits the context of the subject. It may be okay for someone with an 85 to get an "A" in philosophy. However, do you want a heart doctor operating on you who earned an "A" by earning an 85 or a heart doctor who earned an "A" by scoring a 100? Sometimes this difference is significant.

Comparison with a Standard

Comparison with a standard is comparing students to specific criteria such as the ABCDF system. Each letter is assigned a percentage out of a hundred, and the grade is determined from this. For example, using a traditional grading scale, a student with a 94 would receive an "A."

The advantage of this system is its objectivity (though marking itself can be highly subjective, especially for essay items). Either the student received a 94 or they did not. There is no subjective curve. Those who received a high grade truly earned it, while those who received a low grade deserved it.
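As a sketch, the standard can be expressed as a simple lookup, here using the common 90/80/70/60 cutoffs:

```python
def absolute_grade(score):
    """Map a percentage to a letter using the common US cutoffs
    (90 = A, 80 = B, 70 = C, 60 = D, below 60 = F)."""
    for cutoff, letter in ((90, "A"), (80, "B"), (70, "C"), (60, "D")):
        if score >= cutoff:
            return letter
    return "F"

print(absolute_grade(94))  # "A", matching the example above
```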

One problem is that different places can use different scales. For example, an "A" in many US universities is normally 90% and above. However, an "A" in Thai universities is set at only 80%. Both are seen as "excellent" students. This makes comparisons of students difficult. Using the doctor analogy, who do you want to perform heart surgery on you, the 80% "A" doctor or the 90% "A" doctor?

Conclusion

In the next post, we will look at lesser known grading systems that will provide alternatives for teachers searching for ways to help their students. If you have any suggestions or ways of dealing with grading, please share them in the comments.

Tips for Writing Excellent Essay Items

In the last post, there was a discussion on developing essay items. This post will provide ideas on when to use essay items, how to write essay items, and ways to mark essay items.

When to Use Essays

Here are several reasons to know when essays may be appropriate. Of course, this is not an exhaustive list, but it will provide a framework for you to make your own decision.

  • Class size–Even the most ambitious teacher does not want to read 50 essays. Keep in mind the size of the class when deciding if essay items work for you. Generally, classes under 20 can use long response or limited response, classes of 20-40 can use limited response, and above 40 another form of assessment may be best, but it is your personal decision.
  • Cheating–Normally, it is much more difficult for students to copy from one another when using essay items, although I once caught my middle school students attempting to do this. Each answer for essay items must be unique, which is not possible with objective items.
  • Objectives–If your objectives come from the higher levels of Bloom's Taxonomy, essays are one way to assess whether the students have met them. However, sophisticated multiple choice items can do this as well.

How to Write Essay Items

One of the clearest ways to write essay items is to approach them the same way as writing objectives. This means that, for the most part, essay items should include:

  • an action (what they will do) such as explain, predict, organize, evaluate, etc.
  • a condition (the context)
  • Proficiency (criteria for grading) such as content, clarity, thinking, consistency, etc.

Below is an example

Within Southeast Asia, predict which country will have the strongest economic growth over the next 20 years. You will be assessed upon the clarity, content, organization, and depth of thinking of your response. Your response should be 1,000-1,500 words.

Here are the three components in parentheses:

Within Southeast Asia (condition), predict which country will have the strongest economic growth over the next 20 years (action). You will be assessed upon the clarity, content, organization, and depth of thinking of your response (proficiency). Your response should be 1,000-1,500 words.

Here are some other tips

  • Define the task or action for the students. See the previous example
  • Avoid using optional items. This leads to students being evaluated based on different items, which makes comparison difficult from a statistical point of view. It is recommended that all students answer the same items for this reason.
  • Establish limits in words (see example above). This relates again to comparison. If one student writes 5,000 words and another writes 500, it is hard to compare since there was no standard set.
  • Make sure the essay item relates to your objectives. This happens by developing a test blueprint.

Marking Essay Items

The criteria for grading should be a part of the essay item and falls under the proficiency component. These same traits in the proficiency component should be a part of a rubric the teacher uses to mark the assignment. Rubrics help with grading consistently. The details of making rubrics are the topic of another post.

The ideas here are just an introduction to making essay items. There are always other and better ways to approach a problem. If you have other ideas, please share them in the comments section.

Developing Essay Items

Essay items are questions that require the student to supply and develop the correct answer. This is different from objective items in which the options are provided and the student selects from among them. Essay items focus upon higher level thinking in comparison to the lower level thinking focus of objective items. There are two common types of essay items and they are the long response essay and the limited response essay.

Long Response Essay

The long response essay is a complex essay of several or more paragraphs that addresses a challenging question that requires deep thinking. An example of a long response essay item is below.

Compare and contrast Ancient Egypt and Ancient Mesopotamia. Consider the geographic, economic, social, and military approaches. Your response will be graded upon accuracy, depth of thinking, organization, and clarity.

Such a question as the one above requires significant critical thinking in order to identify how these two nations were similar and how they were different. There are an infinite number of potential answers and approaches. A distinct trait of essay items is the potential for so many equally acceptable solutions. Success is determined by the quality of the response rather than in finding one correct answer.

Limited Response Essay

Limited response essay items require a student to recall information in an organized way in order to address a specific problem. The length of the response may be a paragraph or two, and the answer does not have the same depth as a long response. One reason the answers are shorter and simpler is that these types of questions may only address one issue per item. Long response essay items will deal with several issues in each item. Below is an example of a limited response item.

Explain two differences between Ancient Egypt and Ancient Mesopotamia. 

The answer to this question could easily be supplied in a short paragraph. The student lists two differences and should receive full credit. If you compare this item to the long response item, you can see the difference in difficulty. One difference is that there are no criteria for how the student will be graded. The assumption is that listing two differences is enough for full credit. Another difference is the expectations. The long response wanted several comparisons and contrasts while the limited response only required two short contrasts.

In the next post, we will discuss when to use essay items, give suggestions for their development, and ideas for marking.

Writing Test Items for Exams with Power: Part III Multiple Choice Items

Multiple choice items are probably the most popular objective item used for tests. The advantage of multiple-choice items in comparison to true and false or matching items is that multiple choice can assess higher levels of thinking. In other words, multiple choice items can go beyond recall and deal with matters such as application and justification.

There are two components to a multiple choice item. The statement or question of the multiple choice item is called the stem. The answer choices are called options. There are usually four or five options per stem for multiple choice items.

Below are some tips for developing multiple choice items

Stem Clues

A stem clue is when the words in the stem are similar to the words in the options. This similarity serves as a signal for sharp students. Consider the example…

When the Israelites were in Canaan, which of the following was a threat to them?
A. Canaanites
B. Indians
C. Americans
D. Spanish

The word Canaan is in the stem and the word Canaanites is one of the options and most students would rightly guess it is the correct answer.

Grammatical Clues

Sometimes grammar can give the answer away. Take a look at the example.

Steve Jobs was an____________.
A. Lawyer
B. Doctor
C. Entrepreneur
D. Movie Star

The giveaway here is the indefinite article "an" in the stem. Only the option "entrepreneur" can be correct grammatically.

Unequal Option Lengths

The longest answer is often the correct answer for whatever reason. I do not think this requires an example.

Other Uses of Multiple Choice

Multiple choice can also be used for higher level thinking. For example, in mathematics a teacher writes a word problem and provides several options as potential answers. The student must calculate the correct answer on a separate piece of paper and then select it on the test.

For geography, a teacher can provide a map and have students answer multiple choice items about the map. Students must use the map to find the answers. These are just some of the many ways that multiple choice items can go beyond recall.

Tips and Conclusion

Here are some simple tips for improving multiple choice items

  • All wrong answers should be believable and related to the question
  • Avoid negative questions as they are confusing to many students
  • Make sure there is only one correct answer
  • Rotate the position of the correct answer. Remember the most common answer is “C.” Therefore, force yourself not to use this option too often
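The last tip, rotating the position of the correct answer, can be automated by shuffling each item's options while tracking where the correct answer lands. A minimal sketch, reusing the earlier Steve Jobs item (the helper name is hypothetical):

```python
import random

def shuffle_options(stem, options, correct):
    """Shuffle the options for one four-option item and report the
    letter of the correct answer after shuffling."""
    shuffled = options[:]  # copy so the original list is untouched
    random.shuffle(shuffled)
    key = "ABCD"[shuffled.index(correct)]
    return stem, shuffled, key

stem, opts, key = shuffle_options(
    "Which of the following best describes Steve Jobs?",
    ["Entrepreneur", "Lawyer", "Doctor", "Movie star"],
    "Entrepreneur",
)
# `key` now names whichever position "Entrepreneur" landed in,
# so the answer key stays correct no matter the shuffle.
```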

There is much more that can be said about this topic. However, for those new to developing multiple choice items, the information provided will serve as a starting point for developing your own test items.

Testing with Power: How to Develop Great Test Items Part II

Today we will continue our discussion on developing excellent test items by looking at how to write matching items. Matching test items involve two columns. The column on the left has the descriptions (or it should) and the column on the right has the terms. Below is an example.

Directions: Column A contains descriptions of various famous basketball players. Column B contains the names of several famous basketball players. After examining both columns, select the basketball player who matches each description. Each answer can be used only once.

Column A                                                 Column B

  1. Played for the Cleveland Cavaliers                  A. Michael Jordan
  2. Is Jewish                                           B. Tim Duncan
  3. Won six NBA championships                           C. LeBron James
  4. Studied at Syracuse                                 D. Carmelo Anthony
  5. Grew up in Italy                                    E. Kobe Bryant
                                                         F. Amar'e Stoudemire
                                                         G. Dwyane Wade


This example has several strong points.

  • Homogeneity– All of the items have something in common: they are all basketball players. This homogeneity makes it harder for students to guess but easier for them to remember the correct item, because they are accessing information on one subject instead of several. A common mistake in developing matching items is to put disparate terms together, which is confusing for learners.
  • Order of Columns– The descriptions should go on the left and the terms on the right. This is because the descriptions are longer and take more time to read. Read the long material first, then find the short answer in the right column. Many people put the terms on the left and the descriptions on the right, which is detrimental to student performance: students read one short term and then have to shuffle through several long descriptions.
  • More Terms than Descriptions– There should be more terms than descriptions in order to prevent guessing. This also helps prevent students from losing two points instead of one. If the number of descriptions and terms is the same and a student gets one wrong, they get two wrong, because two answers will be in the wrong place. Extra terms avoid this.
  • One Description for One Term– There should be one correct item for each description. Anything else is confusing for many students.
  • Miscellaneous- Number descriptions and give letters to terms. Descriptions should be longer than the information in the terms column.

Developing matching items with these concepts in mind will help students to have success on the examinations you give them. Are there other strategies for matching? If so, please share in the comments section.