Saturday, April 12, 2008

Assessing ESL Writers

Emily Watkins

The field of study for this research project is the assessment of writing proficiency in ESL students. I am particularly interested in the positive and negative implications of using writing as a form of evaluation for language proficiency. As a future English teacher and possible ESL teacher, I want to know how best to assess language proficiency through writing. I am also interested in whether written proficiency is best assessed through pencil-and-paper tests, computerized tests, or a student's written work. Other ESL teachers concerned with evaluating language proficiency through writing would also be interested in this research, as would researchers interested in the accuracy of different language proficiency tests. Initially, I gathered research on anything having to do with ESL students and their compositions, but this proved too broad. I narrowed the research to scholarly articles and books on various types of language proficiency assessments and effective classroom evaluations; essentially, those that deal with how to test and evaluate the writing of ESL students. The majority of the research gathered is from the past decade, although some earlier sources are included that offer insight into traditional evaluation methods of the past as a means of comparison.


Alderson, J. Charles. "Writing." Diagnosing Foreign Language Proficiency. New York: Continuum, 2005. 154-169. In the introduction to his book, Alderson calls into question the ability of existing forms of assessment to evaluate second language proficiency. He points out that indirect tests, assessments that attempt to assess the writing process and other components of "writing ability" such as vocabulary and syntax use, are inadequate. He questions the validity of existing tests, argues that we need better diagnoses of proficiency, and proposes the computerized diagnostic test DIALANG as a solution. In his chapter "Writing," he discusses the results of assessing writing using DIALANG. His research uses sample test items from DIALANG and explores how second language learners who completed the English writing tests performed, as well as the relationship between their performance and certain background variables. Although Alderson's empirical evidence apparently supports his claims, a reader unfamiliar with DIALANG and the charts he uses may find the data difficult to understand. Teachers and researchers interested in computerized assessments of writing proficiency would be very interested in this chapter and Alderson's research.

Genesee, Fred, and John Upshur. "Journals, Questionnaires, and Interviews." Classroom-Based Evaluation in Second Language Education. Ed. Jack Richards. Cambridge: Cambridge University Press, 1996. 118-138. In this chapter of their book, Genesee and Upshur provide alternate forms of evaluation for second language learners. The authors do not prescribe one particular form of evaluation, as they claim that no single form is transferable to all situations and specific needs. This chapter focuses on evaluation through writing rather than testing. The authors suggest using journals for evaluation, claiming that journals serve as interactive conversations, force students to take ownership of their learning, allow teachers to monitor proficiency, and keep a record of student progress. They also claim that journaling provides insight into students' linguistic, cultural, and educational backgrounds and experiences, as well as their attitudes and goals, at any language level. This chapter is highly helpful for ESL teachers looking for alternate means of assessment and interested in the benefits of expressive writing. It is also highly practical, as it provides examples and scenarios from the classroom that an ESL teacher is likely to encounter.

Hamp-Lyons, Liz. "Rating Nonnative Writing: The Trouble with Holistic Scoring." TESOL Quarterly 29.4 (1995): 759-762. 2 Apr. 2008. Hamp-Lyons, an associate professor at the University of Colorado specializing in assessing college and adult writing and in ESL pedagogy, provides valuable insight into assessing the written work of ESL students. Hamp-Lyons argues that holistic assessment, or simply assigning a number or letter grade to student writing, is insufficient: it makes diagnostic feedback impossible and thus provides no help to students. She also claims essay testing has significant limitations and that multiple trait assessment (MTA) is much more appropriate, particularly because it takes into account the differences of ESL learners, which holistic scoring and essay tests do not. In MTA, a team of graders develops three to six criteria on which ESL learners are graded. Teachers looking for a "fair" and adequate form of evaluation for ESL students would take interest in Hamp-Lyons's research. Although her suggestion of using MTA is helpful, the article would be more beneficial if she provided examples of MTA criteria that ESL teachers could include in their grading rubrics. The article is barely three pages long, and she does not go to great lengths to explain the components of MTA.

Harris, David P. "Testing Writing." Testing English as a Second Language. New York: McGraw-Hill, 1969. 68-80. Harris outlines the positive and negative aspects of using composition to assess ESL students. In his chapter on testing writing, he outlines the (then) prevailing views of writing tests: supporters argue that writing tests force students to organize their thoughts, help them relate their ideas, and provide motivation for students to improve their writing; for the teacher, writing assessments are also easier and quicker to prepare. Critics argue that composition has little reliability as a form of assessment, since students perform differently on different tasks and occasions, scoring is subjective, and students can cover up weaknesses by avoiding grammar they are unfamiliar with; for the teacher, these assessments also require more scoring time. Harris suggests that writing tests can be made more reliable by giving clear writing tasks and taking multiple writing samples into consideration. In terms of credibility, the book is nearly 40 years old, and a great deal of research has been conducted since then on the positives and negatives of assessing the writing of ESL students. Harris also makes certain claims seemingly unsupported by any empirical evidence, such as, "We must avoid setting tasks that require a high degree of ingenuity and creativity" (77). This book is useful to teachers of composition and researchers interested in older traditions and views on the assessment of writing as a means of comparison, and it still contains helpful pros and cons of using composition as an assessment.

McNamara, Tim. "Second Language Performance Assessment." Measuring Second Language Performance. Ed. C.N. Candlin. New York: Addison Wesley Longman, 1996. 6-47. In this chapter of his book, McNamara analyzes second language performance assessments. He outlines the traditions behind this type of assessment and claims that empirical evidence supporting the validity of second language performance tests is lacking. The chapter outlines differences between second language performance tests and traditional pencil-and-paper tests, since in performance tests language is both the medium of performance and the target of assessment. It provides definitions based on prior research and cites various researchers of language assessment, and it is helpful to ESL teachers and learners who are unfamiliar with testing methods and who are interested in traditional performance assessment and measurement techniques. Although this chapter is helpful as an introduction to performance assessments, it includes detailed information on the validity of occupational testing for second language learners that is irrelevant for teachers and students concerned with classroom language assessments. And although the author claims little prior understanding of measurement theory is needed, the jargon used in this chapter may be overwhelming to a reader unfamiliar with concepts such as validity and measurement.

Perkins, Kyle. "On the Use of Composition Scoring Techniques, Objective Measures, and Objective Tests to Evaluate ESL Writing Ability." TESOL Quarterly 17 (1983): 651-671. 2 Apr. 2008. Perkins, a linguistics professor, explains the importance of accurate evaluation, as teachers often hope to use the results to help improve writing. In the article, Perkins outlines the effectiveness of different assessment methods, including holistic, analytical, primary trait, and standardized tests. After highlighting strengths and weaknesses of these methods, Perkins claims that no type of evaluation is suitable for all needs and that no test or composition scoring procedure is flawless: all tests can produce unreliable or invalid results. When grading compositions, no matter the method, Perkins claims it is crucial both to assign a grade to written work and to provide comments on student papers. ESL teachers interested in the positive and negative aspects of different kinds of assessment would be particularly interested in this article. Although the author points out many pros and cons of different evaluations, he does not take a definite stance in support of any one method.

Hurley, Sandra Rollins, and Josefina Villamil Tinajero. "An Integrated Approach to the Teaching and Assessment of Language Arts." Literacy Assessment of Second Language Learners. Ed. Paul Smith. Needham Heights: Allyn and Bacon, 2001. 13-16. In this chapter of their book, Hurley and Villamil Tinajero address means of assessing the written work of ESL students. They claim that portfolios are beneficial because they provide evidence of student progress and of the developing knowledge and craft of writers. The authors also caution that before assessment is given, ESL students should be exposed to various types of "great writing" and given effective instruction (14). They suggest that students complete rough drafts, which should not be graded; only final versions should be measured. As student writing becomes more sophisticated, it should be assessed holistically using rubrics and given detailed feedback. The authors also claim that explaining the writing task is crucial and that it should be discussed with students so they understand what they will be graded on. Their research methods include several weeks of observing Mr. Sierra, a first grade bilingual teacher who has taught for 11 years in a partnership school of the University of Texas. This chapter is helpful for ESL teachers interested not only in writing as a form of assessment but also in how assessment can be broken down into rubrics and how best to communicate writing task expectations to students. Although the authors claim students should be exposed to "great writing," they never clearly define what they consider great writing examples for ESL students.
